
⚙️ Exclusive interview: The world’s first open AI platform

Good morning. We have a lot to talk about today. Round one (of year two) of AI earnings is out. But, almost more importantly, I chatted with the founders of a new open-source AI platform, and it’s kind of a big deal.

Happy Thursday everyone.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 📊 AI Earnings Part 1: Meta and Microsoft

  • 💰 AI Earnings Part 2: IBM (and Tesla)

  • 🚨 Bengio releases first international AI safety report

  • 👁️‍🗨️ Exclusive interview: The world’s first open AI platform

AI Earnings Part 1: Meta & Microsoft

Source: Meta

After almost two years in which tech stocks have just kept on going higher, this round of Big Tech earnings feels significant. 

In part, that has to do with the recent launch of DeepSeek’s R1, which led to a lot of tech-related bleeding in the stock market on Monday.

But DeepSeek aside, Big Tech has been spending an insane amount of money building AI and has yet to see revenue to match. Wall Street has been getting impatient for a while; the rout on Monday is a good example of what happens when you mix a shot of fear into an environment of impatience crossed with massive expectations.

Nvidia continued its slide on Wednesday, closing the day down about 4%. Microsoft closed down around 1% and Meta closed slightly up, while the S&P 500 was down 0.4% and the Nasdaq was down 0.5%.

First up, Magnificent 7 members Meta and Microsoft. 

Here’s how they did

Meta: 

  • Meta reported $48.39 billion in revenue for the quarter, above analyst expectations of $47.04 billion. 

  • The social media giant reported earnings of $8.02 per share, above expectations of $6.77. 

Meta said it spent $40 billion in capital expenditures in 2024 and confirmed its plans to spend at least $60 billion in capex in 2025, mainly on building AI infrastructure and supporting Meta’s “core business.” 

CEO Mark Zuckerberg mentioned DeepSeek on the conference call, saying it’s still too early to tell what the model’s impact will be. “I continue to think that investing very heavily in CapEx and infra is going to be a strategic advantage over time,” he said.

Meta made two things clear, according to Deepwater’s Gene Munster: one, that AI is driving revenue growth, and two, that the “AI hardware trade remains intact.”

Microsoft:

  • Microsoft reported revenue of $69.6 billion, a 12% year-over-year increase that came in above analyst expectations of $68.78 billion. 

  • And Microsoft reported earnings of $3.23 per share, above expectations of $3.11. 

Microsoft’s Intelligent Cloud unit, which includes Azure, brought in $25.5 billion in revenue; CEO Satya Nadella said that Microsoft’s “AI business has surpassed an annual revenue run rate of $13 billion.”

Microsoft shares slipped in after-hours trading, something Deepwater’s Gene Munster attributed to Azure, which posted 31% year-over-year growth. “Investors wanted more,” he said. “I see this as an overreaction.”

Microsoft CFO Amy Hood said on the call that, beginning in fiscal year 2026, capital expenditure growth will begin to slow.

AI coding agents for engineering and business impact

Sourcegraph's coding agents deliver real engineering productivity gains.

Our code review agent lets your teams define specific rules that align with your standards and processes, rather than trying to replace human judgment.

Using a powerful combination of code search and AI, dev teams can precisely tune these rules and validate them against recent PRs, ensuring a high signal-to-noise ratio.

That's why engineering leaders at companies like Stripe, Uber, and Palo Alto Networks trust Sourcegraph to accelerate their development workflows.

See the impact for yourself: start using Sourcegraph code search & AI today, and join our Feb 6 livestream for early agent access and live demos of real productivity gains.

AI Earnings Part 2: IBM & Tesla

Source: IBM

IBM and Tesla were also up at bat last night. IBM closed the day up 1.3% and spiked as much as 10% in after-hours trading. Tesla closed the day down 2.2% and rose as much as 4% in extended trading. 

Here’s how they did

IBM: 

  • IBM reported revenue of $17.6 billion for the quarter (and $62.8 billion for the year), a relatively flat number that came in roughly in line with analyst expectations. 

  • But the company reported adjusted earnings of $3.92 per share, well above analyst expectations of $3.75. 

Of significant note, IBM’s generative AI book of business (a combination of both bookings and sales) stands at $5 billion since inception, a more than $2 billion increase from the previous quarter. This is made up of a combination of consulting (80%) and software (20%). IBM noted a software sales increase of 10%, its biggest jump in five years. 

Tesla:

  • Tesla reported $25.71 billion in revenue for the last quarter of 2024, a 2% year-over-year bump that came in below analyst expectations of $27.26 billion. 

  • The company reported earnings of 73 cents per share, below expectations of 76 cents. 

Tesla noted that in 2024, it increased AI compute by 400% and clocked three billion supervised FSD miles. The company said that improvements in FSD will “eventually” unlock an “unsupervised” option for customers, adding that it expects its robotaxi business to launch this year in parts of the U.S. 

Elon Musk, who’s been making unrealized self-driving promises for a decade now, said on the call that Tesla would be “launching unsupervised Full Self-Driving as a paid service” in Austin in June, something he plans to expand across the U.S. by the end of this year.

"When people look back on 2025 and the launch of Unsupervised FSD, they may regard it as the biggest year in Tesla's history,” Musk said, adding that, in 2026, things will go “ballistic.”

  • OpenAI — which has argued that it is fair use to train AI models on all the content on the internet, without compensating the original authors — told the FT that DeepSeek might have trained R1 using outputs from ChatGPT. White House AI advisor David Sacks said there’s a possibility of intellectual property theft …

  • The U.K. House of Lords approved amendments to its proposed Data Bill, which will explicitly subject AI companies to U.K. copyright law, requiring the licensing of copyrighted works for model training. The bill still needs to go through the House of Commons, but creatives took it as a significant win in the fight for copyright protection in the age of AI.

  • Trump officials discuss tighter restrictions on Nvidia’s China sales (Bloomberg).

  • The real DeepSeek revelation: The market doesn’t understand AI (Semafor).

  • India’s open arms approach to AI (Rest of World).

  • Fed holds rates steady, takes less confident view on inflation (CNBC).

  • Microsoft makes DeepSeek’s R1 model available on Azure AI and GitHub (The Verge).

Bengio releases first international AI safety report

Source: Yoshua Bengio

Yoshua Bengio, a Turing Award-winning computer scientist, on Wednesday announced the publication of the first International AI Safety Report, a 300-page document born of a collaboration among more than 100 AI experts from around the world.

The intention behind the report was to create an “AI safety handbook,” a document that tracks advancements in AI capabilities and the risks those advancements pose. Bengio chaired the report.

The details: The report breaks things down into three categories: the current capabilities of generative (or general-purpose) AI, the risks associated with those capabilities and the means to mitigate those risks.

  • Though the details of capability enhancements (and their real-world legitimacy) remain unclear, owing to a lack of transparency and the ineffectiveness of benchmarks, the report sums up the pace of progress quite simply: “A few years ago, the best large language models (LLMs) could rarely produce a coherent paragraph of text. Today, general-purpose AI can write computer programs, generate custom photorealistic images and engage in extended open-ended conversations.”

  • The report classifies risk into three categories: risks from malicious use (cyberattacks, misinformation, identity theft, fraud, abuse, etc.); risks from malfunctions (algorithmic bias and hallucinations); and systemic risks (sustainability, economic inequity, etc.).

The takeaway: The report’s conclusion is anything but conclusive, with the researchers finding that even the short-term future is “remarkably uncertain,” something that Bengio said ought to inspire societies and governments to act. 

“AI does not happen to us: choices made by people determine its future,” the report reads. “It will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take.”

Exclusive interview: The world’s first open AI platform

Source: Oumi

The internet as we know it today was built on open-source software, a basic idea that baked transparency, peer review and peer improvement into the code that has gone on to shoulder the digital world. 

But, since its explosion into the mainstream in 2022, generative artificial intelligence has not followed those same principles. The bulk of generative chatbots — those from Anthropic, OpenAI, Microsoft and Google, for example — are closed off, meaning data, source code and model weights are intentionally obscured.

The impact of this closed-off environment is multifaceted.

  • For one, it’s practically impossible for researchers to study and verify the actual capabilities of the models on offer; since they don’t have access to training data and code, they have to make assumptions when attempting to test everything from reasoning capabilities to energy intensity. 

  • For another, closed models mean improvements can’t come organically from a community that would otherwise keep advancing the technology; it all comes down to the big developers, who have a significant financial stake in their proprietary systems.

Even the developers who claim to be open, such as Meta, aren’t much more open than their peers; Meta’s models are, at best, open weights, but training data and source code remain elusive. Even Google DeepMind’s AlphaFold 3, a model intended to dramatically speed up the drug discovery process, is a closed model.

This conflict between open and closed source spiked again this week in the wake of DeepSeek’s open release of R1. Now, R1 isn’t fully open source — DeepSeek didn’t release the training data or source code used to build the model — but it was enough to inspire Hugging Face to start building a fully open replication of the model.

Entering this landscape is Oumi, a brand-new, fully open AI platform created in collaboration with more than a dozen leading institutions, including MIT, Stanford and Carnegie Mellon. It was founded by a team of machine learning researchers and engineers hailing from Google, Apple, Nvidia and Meta, all on a mission “to make AI unconditionally open for all.”

  • Oumi, a public benefit corporation, is launching with $10 million in funding. The platform is designed to support the fully open (training data, code and weights) training, fine-tuning, deployment and evaluation of models. 

  • The idea is to function as an all-in-one platform supporting the full lifecycle of an AI model, enabling the kind of globally open collaboration that made software what it is today.

Oumi was born, in part, out of a desire to aid an academic community that has lately been locked out, Manos Koukoumidis, Oumi’s co-founder and CEO, told me. 

All the researchers Koukoumidis spoke to said the same thing: “We can’t be doing research by calling a black box OpenAI API.” And with AI becoming more and more closed, independent researchers “cannot help improve it, audit it, find the issues, make it better. And at the same time, it’s just a waste. Academics [are] an untapped resource. We’re so capable and so willing, but we cannot contribute.”

And although Oumi’s genesis is in academia, the idea is — like software — to enable a “symbiotic relationship,” opening up the latest results of open-source collaboration to anyone, enterprises included, that wants to leverage them. 

  • This issue of academia becoming locked out of AI has been ongoing for years now; in large part, it has to do with simple economics. Universities can’t compete with the private labs and their billions in funding, hundreds of thousands of GPUs and million-dollar salaries. 

  • Oussama Elachqar, one of Oumi’s co-founders, told me that we’ve reached “a critical inflection point.” He said that open source is good, but it’s not enough; “because we have so many siloes, having open collaboration is the way we can together build the frontier models … in a way that makes sense for everybody.”

The two founders expect Oumi to inject a much-needed burst of trust into an industry full of blog-post releases, no peer review and little transparency; OpenAI’s o3 and DeepSeek’s R1 are both good examples of this. They seem capable, but that capability hasn’t been independently verified, so researchers can’t know if something else was going on behind the scenes to achieve a given score. (OpenAI’s access to the ARC-AGI and FrontierMath datasets, which came to light only after the unveiling, certainly casts some doubt on the legitimacy of the performance.)

Elachqar said that an important element of Oumi is its all-in-one evaluation component, which makes it far easier to test models across a variety of benchmarks (tests that carry far more weight, since the training data and source code behind the models are open).

And while there are geopolitical concerns about openly accessible AI, the founders were clear on one point: open AI is safer and more trustworthy than the alternative. Openness neutralizes the dangers inherent in the “AI race,” that rush to one-up the next leading lab on a benchmark score.

“The reality is simple,” Elachqar said. “If the U.S. is less open than China, I think that really puts American innovation at risk.” (A prescient statement, considering we chatted a few days before the DeepSeek craze). 

Koukoumidis explained that the plan right now is to build a strong open-source community, and worry about monetization later. He said that Oumi, unlike OpenAI and the like, is not “capital intensive.” 

He added that, if a single company were to “dominate and control AI for all the rest of us, that would lead to an extreme concentration of power … that would be a very bad, very bad outcome for all of us.”

Oumi exists, in part, to prevent that from happening.

Which image is real?


🤔 Your thought process:

Selected Image 2 (Left):

  • “The branches in the foreground of image 2 gave me the clue.”

Selected Image 1 (Right):

  • “Cloud formation in pic 2 looks unnatural.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on ChatGPT Gov:

73% of the government workers among you do not like this at all; only 27% are excited about it.

And 76% of the non-government workers among you think it’s a great idea; only 24% don’t like it.

I am surprised by the split between government workers and non-government workers here. Very interesting.

I don’t like this:

  • “I work in the mental health field, and support people who are impacted at a higher rate by systemic inequality and oppression. I'm unsurprised, but disappointed and alarmed, by how quickly big tech companies in general are being adopted in the U.S., without the proper oversight/ accountability to protecting data/ privacy/ safety of its citizens in general - especially marginalized populations. Sometimes I wonder how all of these very quick major changes will impact our society's mental health over the next 5-10 years, in ways that we won't recognize til much later.”

I don’t like this:

  • “We are people, we want to be free, we are not numbers, we don't fit in boxes.”

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.