⚙️ What DeepSeek means for the future of America’s AI edge

Good morning. President Trump has imposed massive tariffs against Canada, Mexico and China, a move that coincides with another round of Big Tech earnings this week (Amazon, Google and AMD).
We can expect some near-term uncertainty and volatility. In this context, it will be interesting to see how the market handles Google and Amazon’s results.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🔭 AI for Good: Classifying Supernovae
👁️🗨️ OpenAI fires back at DeepSeek, launches o3 mini
🏛️ What DeepSeek means for the future of America’s AI edge
AI for Good: Classifying Supernovae

Source: Northwestern University
The news: Every night, the Zwicky Transient Facility (ZTF) uses a wide-field camera to peer into the depths of space. In December, the facility’s astronomical survey officially passed 10,000 classified supernovae, the flashes of light emitted by dying stars.
The details: The ZTF has been in operation since 2017, but in 2023 got a major boost in the form of machine learning algorithms.
An international team led by researchers at Northwestern University developed something called Bright Transient Survey Bot (BTSbot), an algorithm that enables the automatic detection, identification and classification of supernovae, an enormous time-saver for the astrophysicists involved.
It was trained on more than 1.4 million historical images, including confirmed supernovae as well as temporarily flaring stars. Since beginning operation, it has found half of the brightest supernovae before a human did.
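For readers curious what this kind of classifier looks like in code, below is a generic sketch of a small convolutional network that sorts survey image cutouts into “supernova” versus “other transient.” It is not BTSbot’s actual architecture or data pipeline, just an illustration of the general setup, assuming PyTorch and random stand-in images:

```python
# Hypothetical sketch of a binary transient classifier -- NOT BTSbot's real
# architecture or training data, just an illustration of the approach.
import torch
import torch.nn as nn

class TransientClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: supernova vs. not
        )

    def forward(self, x):
        return self.head(self.features(x))

# A batch of 64x64-pixel, single-channel survey cutouts (random stand-in data).
cutouts = torch.randn(8, 1, 64, 64)
model = TransientClassifier()
probabilities = torch.sigmoid(model(cutouts))  # per-cutout probability of "supernova"
print(probabilities.squeeze())
```

In practice, a bot like this would be trained on labeled historical cutouts and would only flag candidates above a confidence threshold for human follow-up.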
Why it matters: The more stellar data scientists have on hand, the better they can understand the lifecycle of stars. And they need a very large sample of supernovae in order to begin deriving insights from that data.

GAIN 1 HOUR EVERY DAY WITH AN AI ASSISTANT
Fyxer AI organizes your inbox, drafts extraordinary emails in your tone of voice, and writes better-than-human meeting notes.
Designed for teams using Gmail, Outlook, Slack, Teams, Google Meet, or Zoom.
OpenAI fires back at DeepSeek, launches o3 mini

Source: OpenAI
The news: Last week, OpenAI made its o3 mini model — first unveiled in December — available in ChatGPT. Free users can try it by clicking the “reason” button in the ChatGPT interface, while Plus and Teams users get a higher rate limit of 150 queries per day and Pro users get unlimited access.
The details: The model costs 55 cents per million cached input tokens and $4.40 per million output tokens, a price point that is significantly cheaper than OpenAI’s other models and, rather conspicuously, is quite competitive with DeepSeek’s R1, which costs 14 cents per million input tokens and $2.19 per million output tokens.
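For a rough sense of what those rates mean in practice, here is a back-of-the-envelope comparison. The per-million-token prices come from the figures above; the monthly workload volumes are made-up assumptions, not anything published by OpenAI or DeepSeek:

```python
# Back-of-the-envelope API cost comparison using the per-million-token prices
# quoted above. The workload volumes below are illustrative assumptions.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "o3 mini (cached input)": (0.55, 4.40),
    "DeepSeek R1": (0.14, 2.19),
}

def estimated_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for a given token volume."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${estimated_cost(model, 50_000_000, 10_000_000):,.2f}")
```

Under those assumed volumes, the hypothetical bill works out to roughly $71.50 on o3 mini versus $28.90 on R1, which is the gap the pricing pressure is really about.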
OpenAI’s safety advisory group has classified o3 mini as “medium risk,” similar to o1, though it is OpenAI’s first model to earn a “medium” score for risk related to model autonomy, due simply to its coding capabilities, which “indicate greater potential for self-improvement.” Models ranked “medium” and below can be released; OpenAI has yet to rank one of its models higher than “medium.”
The data and technical specifics around the construction of o3 mini remain obscured. All we know is that it, like OpenAI’s other o-series models, was trained using large-scale reinforcement learning to use chain-of-thought (CoT) ‘reasoning,’ in which a model generates a series of intermediate steps leading to its final answer. This approach, which has significantly improved model capabilities, has become a popular one (DeepSeek employs it as well).
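From the outside, querying a reasoning model looks the same as querying any other chat model; the intermediate steps are generated on the provider’s side rather than spelled out by the user. Here is a minimal sketch, assuming the current OpenAI Python SDK, an API key in the environment and a purely illustrative prompt:

```python
# Minimal sketch of querying o3 mini through the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {
            "role": "user",
            "content": "A tank holds 180 liters and drains at 3 liters per minute. "
                       "How long until it is empty?",
        }
    ],
)

# The chain-of-thought happens server-side; what comes back is the final answer.
print(response.choices[0].message.content)
```

For OpenAI’s o-series, those hidden intermediate tokens are billed as output tokens, one reason ‘reasoning’ queries tend to run slower and pricier than standard chat completions.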
In testing o3 mini, math professor Daniel Litt said the model is clearly better than o1. “Reminds me of talking to someone who knows a huge amount fairly shallowly, and is exceedingly overconfident. But clearly has passed the threshold of genuine usefulness,” he said, adding that the model appears functional enough to enable a productive back-and-forth that can allow him to find the correct answer to hard questions.
The context: This comes as, according to The Information, paid subscribers to ChatGPT have tripled to more than 15 million, a number that feels relevant in light of OpenAI’s latest $40 billion funding talks.
The energy and environmental costs of these more advanced models remain unclear. However, early research suggests the obvious: despite any training efficiencies these models may exhibit, their CoT approach massively drives up the energy intensity of inference (the use of the model). In other words, those seconds or minutes ChatGPT takes to solve your polynomial or answer that physics question translate directly into a surge in energy consumption.

DeepSeek R1 launches on Nebius AI Studio
R1 is now available on Studio for a super-competitive price: $0.8 per 1M input tokens.
If you want to play with it — or go all in by incorporating R1 into inference workloads with privacy in mind — Studio is the best place to do so.
We also added other models: DeepSeek V3, QwQ-32B, Phi-4 and more.


As much as the rise in the data centers required to power AI poses obvious questions about pollution, emissions and energy/water consumption, there’s an untold story behind the data center expansion: the land itself that the developers want to consume.
Speaking of data centers, Amazon is building a few in Mississippi. And, according to documents reviewed by Bloomberg, the cost of their construction has spiked 60% to $16 billion.

Brazil’s push for comprehensive AI regulation (Rest of World).
Google’s AI Super Bowl ad is wrong about cheese (The Verge).
Trump imposes tariffs on Canada, Mexico, and China (Semafor).
Reid Hoffman enters ‘wondrous and terrifying’ world of health care with latest AI startup (CNBC).
AI systems with ‘unacceptable risk’ are now banned in the EU (TechCrunch).

Scite: A tool that, by continuously monitoring hundreds of millions of scholarly sources, aims to be your personal research assistant.
What DeepSeek means for the future of America’s AI edge

Source: Created with AI by The Deep View
It takes a stock market collapse for the underside of the AI world to slip into the mainstream.
That’s what happened last week, when politicians, executives and (seemingly) every news outlet in the world presented a rather fatalistic view of DeepSeek’s launch of R1, a seemingly cheaper ‘reasoning’ model that performs on par with OpenAI’s best work.
By the end of the week, the market had partially recovered from Monday’s losses, a recovery that could continue in light of a report from SemiAnalysis, which estimates that DeepSeek is likely operating a $500 million GPU cluster consisting of relatively state-of-the-art Nvidia chips.
Regardless, the cost of DeepSeek’s API remains cheaper than OpenAI’s, so it seems as though some gains were made on the efficiency side of things, something that’s good for the technical science of AI, and complicated for the precarious, hype-riddled business that has blossomed around the growing tech.
According to Songyee Yoon, an AI expert and founder of the venture capital firm Principal Venture Partners (PVP), it represents one of the first moments in the ‘AI race’ in which developers were able to “sit down and optimize.”
Though Yoon acknowledged that it’s difficult to know how accurate DeepSeek’s numbers are, she said the approach represents an “engineering promise. It’s something to be celebrated.”
But she told me that this isn’t a “Sputnik moment,” and that it shouldn’t be looked at as one.
“Our engineering today is built upon the progress of humanity. So we cannot say that anything came out of a vacuum, that it’s a (strictly) American technology,” she said. “It’s ours. It should be used for humanity. There are many engineers who aspire to build good technology. We cannot stop them. If we stop exporting chips to China, it gives them a motivation to build their own. That’s what happened with EVs. And now look at their EV industry.”
The race, she said, ought to be abandoned, replaced by global collaboration and the pursuit of open science, something that will benefit everyone.
But that’s unlikely to happen; even as the more open approach demonstrated by DeepSeek is beginning to push and prod the AI industry to bring down its many walls — with Sam Altman recently saying OpenAI was “on the wrong side of history” when it comes to its closed-off approach — the White House is considering further tightening chip-related export restrictions against China, something that comes in conjunction with a series of tariffs that have been imposed against China, Mexico and Canada.
But as it stands right now, with American leadership worried about losing the country’s technological edge, it’s difficult to tell whether chip export restrictions are worth pursuing at all.
Kumar Garg, president of Renaissance Philanthropy and a veteran of the Obama-era White House’s Office of Science and Technology Policy, told me that it’s hard to tell if the chip export controls are working in the way they were intended, adding that the topic is a hard one for private-sector analysts to speculate on.
“When I was in the government, we would always say there’s the public version of the answer, and there’s the one the intelligence community would give you,” Garg said. “The key question — in a classified setting — is: what do we know about what top Chinese firms are feeling rate limited by?”
If compute isn’t a real limitation, then the export restrictions likely aren’t working. But if the export restrictions are simply slowing adversarial countries down in a race with no clear finish line, “the very fact that you’re ahead might allow you to stay ahead.”
But to Garg, American leadership in AI ought to involve more than just compute.
Part of it involves international talent. Garg said that if the U.S. “makes it harder for those folks to stay,” the country might well begin to lose some of its edge. Part of it involves infrastructure, a build-out — with enormous environmental and health impacts — that is currently underway, helped along in part by the early efforts of Project Stargate.
And part of that edge involves innovation not just on the technology side, but in its application: “The models are more powerful than our use cases. By a lot. We sort of have a set of canonical examples, but we’re not really investing as a country in the applied use cases for how to take this capability and actually have it make dramatic improvements in our lives,” Garg said.
“I think it’s a big open question. I think right now, all of the attention is on the capability side. How are we on the capability edge, and how do we win the race,” he added. “I think there’s much less work and emphasis … if you say ‘explore’ versus ‘exploit,’ how do we take this capability curve and apply it to important things we care about? Part of my view is that we need to put that on the table, and say that’s actually how we’re going to get the upside on all of this. It’s a ‘yes, and’ approach.”
The challenge with this is that it involves what Garg called “bilingual work.” AI experts need to work with domain experts in other fields to figure out the most impactful, most beneficial and least harmful modes and methods of application.
“We’re a free market economy,” he said. “You have to prime the pump on these use cases before the market will come.”


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“The mountain (far right) in the fake image cuts off abruptly. Really well done, but also ‘too perfect’ in the lighting.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on OpenAI’s new valuation talks:
42% of you said that $340 billion number is insane. A third said it’s a bubble. Only 15% think it makes sense.
Number is insane:
“The valuation implies a certain moat of competitiveness - which I cannot readily see. While they have first mover advantage and big-name backers, with the amount of money at stake, competition will find a way to attack their positioning. Invest with caution.”
Number is insane:
“The financial viability of this investment is questionable.”
Have you been using o3 mini? How do you find it?
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.