
⚙️ Interview: The impact of the DeepSeek effect

Good morning. There’s a new Indiana Jones video game out, and instead of using AI to replicate Harrison Ford’s iconic voice, they just … hired a voice actor.

“You don’t need artificial intelligence to steal my soul,” Ford said. “You can already do it for nickels and dimes with good ideas and talent. He did a brilliant job and it didn’t take AI to do it.”

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🌎 AI for Good: OpenHarvest for farmers

  • 🏛️ Poll: British public wants targeted AI regulation 

  • 👁️‍🗨️ The new paradigm of model distillation

  • 🚨 Interview: The impact of the DeepSeek effect

AI for Good: OpenHarvest for farmers

Source: IBM

The problem: Small farmers in Malawi have traditionally relied on a relatively predictable rotation between rainy and dry seasons to organize and sow their fields. But amid a changing climate, those once-predictable seasons no longer are, making farming the land even more challenging.

What happened: So Heifer International partnered with IBM to develop an open-source tech solution called OpenHarvest, a system that, through artificial intelligence and climate modeling, provides hyperlocalized visual agricultural data and recommendations. The system launched in 2023.  

  • In addition to monitoring soil composition, OpenHarvest maps each participating farm with a set of latitude and longitude points which trigger recommendations based on local weather forecasts and crop growth stages. 

  • In its pilot year, in which around 200 farmers took advantage of the system, IBM said that most farmers saw increased yields, adding that some even doubled or tripled their production. 

Heifer said it plans on expanding the program, and intends to explore a further integration of generative AI systems to introduce “smart” farming to Malawi.

Your Next e-Bike—Up to 60% Off Retail!

Upway is your go-to destination for buying and selling e-bikes.

Explore top-tier brands like Aventon, Specialized, Cannondale, and more—up to 60% off retail prices.

Every bike comes with a warranty and is delivered to your doorstep in just a few days.

Plus, enjoy an extra $150 off your next bike with code THEDEEPVIEW.

Poll: British public wants targeted AI regulation 

Source: Unsplash

The news: 87% of the British public would support a law requiring developers to prove their AI systems are safe before release, according to a new poll conducted by YouGov on behalf of the nonprofit Control AI. A further 60% would like to see a ban on the development of “smarter than human” models, while only 9% said they trust tech CEOs to act in the public interest when it comes to AI regulation. 

The poll was first reported by Time.

The details: The January survey of 2,344 British adults comes in the midst of the U.K.’s push for greater integration and fewer regulations, with Prime Minister Keir Starmer recently promising to “mainline AI into the veins of this enterprising nation.”

  • The poll was accompanied by the publication of a statement — “the U.K. can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems” — that was supported by 16 British lawmakers. 

  • The British government said recently that it plans to consult with businesses before creating a regulatory proposal, and remains focused on driving adoption across the government and its constituents. 

The poll’s findings closely mirror the results of extensive polling of U.S. voters, which has found that the majority of American adults badly want the government to regulate AI, particularly more powerful systems. 

  • Amazon beat earning expectations on Thursday, but weak guidance for the current quarter brought the stock down. The company reported capex of $28 billion, marking a near 100% year-over-year increase.

  • In a new paper, a group of Hugging Face researchers argue that fully autonomous ‘AI Agents’ — “systems capable of writing and executing their own code beyond predefined constraints” — should not be developed. The argument is that “the more control a user cedes, the more risks arise from the system.” A fully autonomous system, then, represents an enormous risk.

  • OpenAI eyeing more data centers in Texas, other states (Bloomberg).

  • An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it (MIT Tech Review).

  • Google is adding AI watermarks to photos manipulated by Magic Editor (The Verge).

  • After DeepSeek leap forward, Russia's Sberbank plans joint AI research with China (Reuters).

  • DeepSeek’s rise shows why China’s top AI talent is skipping Silicon Valley (Rest of World).

Your product roadmap is ambitious—do you have the team to build it?

With top engineers in high demand, hiring the right talent is expensive.

Luckily, we have a network of elite developers, ready to go and at a fraction of the U.S. hiring cost.

Need to scale your team now?

The new paradigm of model distillation

Source: Unsplash

In focus: DeepSeek’s launch of R1 might well represent an inflection point for the AI sector on a number of fronts, the most important of which involves model efficiency. Now, OpenAI has accused DeepSeek of leveraging something called “distillation” to build R1, a process that is, well, distilling throughout the field, and one with the potential to massively shift the dynamics of the current paradigm. 

What’s going on: Model distillation essentially involves using a large model to train a smaller one; the smaller model is trained to “learn” the patterns exhibited in the larger model’s outputs. 
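At its simplest, the objective can be sketched as training the student to match the teacher’s softened output distribution. The snippet below is a toy illustration of that classic distillation loss in plain NumPy, not DeepSeek’s or any lab’s actual training code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures give softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the classic knowledge-distillation objective."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

# A student whose logits track the teacher's incurs a lower loss
# than one that disagrees with it.
teacher = [4.0, 1.0, 0.2]
close_student = [3.8, 1.1, 0.3]
far_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice, the Stanford work described below fine-tuned on text generated by the teacher rather than matching logits directly, but the principle is the same: the small model learns to reproduce the large model’s outputs.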

Researchers at Stanford and the University of Michigan were recently able to leverage distillation as a means of very cheaply fine-tuning a small ‘reasoning’ model. 

  • The team constructed a dataset of a thousand questions and answers distilled from Google’s Gemini Thinking Experimental model, then fine-tuned an off-the-shelf pre-trained model — a Qwen 2.5 32-billion-parameter model — on that dataset, while incorporating something they called “budget forcing,” a technique for controlling the duration of test-time compute. 

  • The resulting model outperformed o1 preview on the competition math benchmark by as much as 27%. Training the model required only 26 minutes and 16 Nvidia H100 chips at a cost of less than $50. 
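The idea behind budget forcing is simple to sketch: at decode time, the generator caps how long the model “thinks,” and can suppress an early stop to extend its reasoning. The names and mechanics below are an illustrative toy with a mock token stream, not the researchers’ actual implementation:

```python
# Toy sketch of budget forcing at decode time. END_OF_THINKING stands in
# for whatever delimiter ends the model's reasoning phase.
END_OF_THINKING = "</think>"

def budget_forced_decode(next_token_fn, max_thinking_tokens, min_thinking_tokens=0):
    """Drive a token generator while enforcing a test-time compute budget.

    next_token_fn() returns the model's next token (a string here).
    If the budget is exhausted, the end-of-thinking delimiter is forced;
    if the model stops too early, "Wait" is appended to extend reasoning.
    """
    tokens = []
    while True:
        if len(tokens) >= max_thinking_tokens:
            tokens.append(END_OF_THINKING)  # force the thinking phase to end
            break
        tok = next_token_fn()
        if tok == END_OF_THINKING and len(tokens) < min_thinking_tokens:
            tokens.append("Wait")  # suppress the early stop; keep reasoning
            continue
        tokens.append(tok)
        if tok == END_OF_THINKING:
            break
    return tokens

# Mock "model" that tries to stop thinking after two tokens.
stream = iter(["step1", "step2", END_OF_THINKING, "step3", END_OF_THINKING])
out = budget_forced_decode(lambda: next(stream),
                           max_thinking_tokens=10, min_thinking_tokens=4)
```

Capping the delimiter bounds test-time compute from above; the “Wait” nudge bounds it from below, which is what lets a single knob trade off compute against answer quality.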

What it means: The reason DeepSeek freaked the market out to the extent that it did is that the idea of cheap, efficient models — models that don’t require billions of dollars of Nvidia chips, in other words — runs entirely contrary to the prevailing AI business model, which is premised on massive scale. 

The interesting dynamic now is that model distillation works — and even if OpenAI doesn’t permit it, there are plenty of open models out there that do. But successful distillation relies upon the prior existence of massive models, models that could only have resulted from the early thesis behind the AI race. So, those early billions were necessary; the next injection of investment, which promises to be even larger, seems more questionable. Either way, the paradigm of sheer scale and nothing else seems to be shifting. 

Interview: The impact of the DeepSeek effect

Source: Created with AI by The Deep View

In just the few short weeks since Chinese firm DeepSeek launched R1, we’ve seen a massive stock market sell-off, a moderate stock market recovery, a series of cheap (or free) model releases intended to be competitive with R1, a broader push toward the idea of open-source, hundreds of billions of dollars worth of commitments from Big Tech companies to continue building out AI infrastructure and an overall evolution of the global “AI Race.” 

This is “the DeepSeek effect,” according to Sean Evins, a partner at Kekst CNC and Meta’s former director of global affairs and public policy. 

The news: Even as the market struggles to deal with questions that DeepSeek pushed to the fore — specifically, around the strength of the AI business model and the timing of any return on investment — that “DeepSeek effect” is spreading to the halls of nervous legislators around the world.

  • In recent weeks, Italy, Taiwan, Australia and South Korea have all prohibited the use of DeepSeek on government devices, due to concerns about data privacy and ownership. 

  • According to a Thursday report from Business Insider, U.S. lawmakers are planning to introduce a bill shortly that would likewise ban DeepSeek’s chatbot from government devices. 

“Because it came from China, I think it opened up questions faster and from so many more places than it likely would have had it been an American company or European company,” Evins told me. 

The U.S. lawmakers, according to BI, cited information from cybersecurity company Feroot’s recent investigation into DeepSeek’s app, which identified “hidden code capable of transmitting user data to China Mobile’s online registry.” 

Beyond the “motivating factor” of the competition that DeepSeek represents, Evins said that the moment raises a lot of questions about the dynamics of this technological race between the U.S. and China. 

“It can't just be we pull the lever of fast innovation. We also have to push the button on the appropriate governance structures to make sure that we're taking care of things that we need to take care of,” he said. “Literacy should be a massive priority in companies, in governments, in particular.” He added that governance — rooted in an exploration of closed-versus-open models and an understanding of data usage and model details — is vitally important; it’s a conversation that, according to Evins, DeepSeek accelerated.

“If we're trying to build a skyscraper here, you can't start by decorating the penthouse. You really have to build the right sort of foundations,” he said. “We're still very much in that phase … we've got some scaffolding, we've got the shell. There's loads of questions about how we're going to get to the top, and does it need to be as tall as we think.”

“A lot of it's going to take some amount of political will,” he added. 

The interesting element to this is that, while governance and regulation are — at best — nascent in the U.S., they aren’t in China. Regulatory oversight, for a number of reasons, was baked into China’s circa 2017 efforts to turn itself into a dominant power in AI. 

  • Though China is still working to develop a broad law to address AI, the country began issuing rules related to AI in 2021. These early regulatory efforts, according to Carnegie fellow and China expert Matt Sheehan, include rules on recommendation algorithms and generative AI. 

  • And though they’re focused on “information control,” the rules also include transparency requirements and workers’ rights protections. 

The launch of DeepSeek proved that America’s current approach to its competition with China — i.e., drop everything and rush to enhance sheer capabilities — isn’t working; R1 performs on par with OpenAI’s top models.

And the ripples that have resulted from that launch, as Renaissance Philanthropy President Kumar Garg recently told me, indicate that, if the U.S. wants to remain ahead, the country should be innovating in areas that go beyond capability.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The blue tint on the buildings in Image 2 does not look realistic. Also, the lens flare in Image 1 appears quite realistic to me.”

Selected Image 2 (Right):

  • “Lions Gate Bridge in BC, looks real enough...”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI in war:

32% of you don’t like Google’s policy change. And 32% of you think we need international laws regulating the use of AI in the military.

12% of you aren’t surprised by the change; 8% think it’s a good thing.

I don’t like this at all:

  • “Warfare doesn't need to be super-intelligent, all this does is endanger smaller, poorer countries that don't have the same resources.”

Not surprising:

  • “It was only a matter of time before it would happen anyway. Whether covertly or otherwise.”

Would you like to see a ban on the development of 'smarter than human' AI systems?


If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.