⚙️ Poolside is chasing AGI through software and new architectures

Good morning. OpenAI’s latest image-generation upgrade gave rise to a social media trend (aren’t those just great?) in which people are posting AI-generated Studio Ghibli versions of their photos.
So, a gentle reminder that Studio Ghibli director Hayao Miyazaki has called GenAI art “an insult to life itself.”
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🚁 AI for Good: Canadian reforestation
🏛️ The legal nuance behind Anthropic’s apparent early court win
👁️🗨️ Poolside is chasing AGI through software and new architectures
AI for Good: Canadian reforestation

Source: Flash Forest
Over the past 10,000 years, the world has lost more than a third of its forests. Half of that loss occurred in the last century.
Roughly 10 million hectares (25 million acres) of forest are cut down each year, and that doesn’t count the more than three million hectares of tree cover burned annually in wildfires.
Billions of trees are lost each year, and only about 1.9 billion are planted, according to the UNEP. The problem is that we’re losing older trees to chainsaws and wildfires, and older trees are capable of sequestering far more carbon dioxide than the saplings being planted in their ashes.
Some scientists, meanwhile, have said that planting one trillion trees would be one of the best ways of mitigating climate change.
As part of its efforts to power reforestation, Canada in 2022 announced a $1.3 million contribution to Flash Forest, a company that combines drones, AI and plant science to dramatically speed up reforestation efforts, specifically in the wake of wildfires.
The team uses data and aerial mapping to identify areas to focus its reforestation efforts. Then, drones flying over those areas fire self-sustaining seed pods into the soil.
The pods come loaded with local tree species to ensure proper geographic biodiversity, and the approach eliminates the need for human planting and tending, drastically speeding up the process and enabling it to operate at far greater scale.
Through automated manufacturing, Flash Forest is able to produce hundreds of thousands of these pods each day, a quicker, less energy-intensive approach than growing saplings in a nursery.
The company aims to plant one billion trees by 2028.

Preparing Today for an (Even More) AI-enhanced Tomorrow
What if your automation strategy is already becoming outdated?
“By 2029, 80% of enterprises with mature automation practices will pivot to a consolidated platform that orchestrates business processes and agentic automation.”
According to the latest Gartner® "Predicts 2025" research, “The Future of Automation Is Autonomous”, we're rapidly moving toward AI-enhanced autonomous systems that will forever alter how businesses design, orchestrate, and automate complex processes.
Download the research now and start planning your organization’s strategic response.
Anthropic’s early court win turns on the nature of fair use

Source: Anthropic
A federal judge has denied music publishers’ request for a preliminary injunction that would have prevented Anthropic from training its models on copyrighted song lyrics.
The background: The ruling marks the latest evolution of one of the more significant AI and copyright battles, in which a series of record labels — including Universal Music Group — in 2023 sued Anthropic for copyright infringement, arguing that it had trained its chatbot on copyrighted song lyrics.
The details: Though it’s certainly an early legal win for the Claude developers, the reasoning doesn’t spell doom for the publishers; the judge, noting that a preliminary injunction marks an “extraordinary” form of relief, said that the growth of the AI content licensing market means that the harm here would be “compensable rather than irreparable.”
It would need to be “irreparable” to justify an injunction.
Further, the judge argued that granting such an injunction would amount to ruling on whether the training of generative AI models on copyrighted content is — or isn’t — protected by fair use, a ruling she declined to make at this stage.
“Here,” she wrote, “it is an open question whether training generative AI models with copyrighted material is infringement or fair use.”
A handful of the major copyright lawsuits — including this one — hinge, in part, on that open question of fair use; that is, can developers, without notifying or compensating creators, train commercial models on copyrighted material? At this stage, none of the developers are disputing that they have trained models on copyrighted material; they’ve just been arguing that they’re allowed to do so, a confident stance whose legal footing remains far from settled.
Evidence, perhaps, of the precariousness of their position can be found in the policy proposals prepared and sent by Anthropic, OpenAI and Google, three proposals that called for the legal enshrinement of fair use for GenAI training to keep the U.S. ahead in the race to a hypothetical general intelligence.
The U.S. Copyright Office is preparing a report that will respond to the fair use question as it pertains to AI, though it’s not clear when it will be published.
At the same time, the court ordered Anthropic to provide a wider database of Claude prompt and output records, which the plaintiffs can sift through for instances of copyright infringement.


Export wars: The U.S. on Tuesday added dozens of Chinese companies to its export blacklist, preventing U.S. firms from supplying them without specific government permits.
An infrastructure bubble: TD Cowen analysts said Wednesday that Microsoft has abandoned data center projects totaling 2 gigawatts of capacity over the past six months due to oversupply. This shortly follows a warning from Alibaba’s chairman of a data center bubble, in which “people are investing ahead of the demand that they’re seeing today, but they are projecting much bigger demand,” implying that the data center buildout — and Nvidia’s performance — might soon take a bit of a hit.

Everything you say to your Echo will be sent to Amazon starting on March 28 (Ars Technica).
DOGE says it needs to know the government's most sensitive data, but can't say why (NPR).
This company is using AI to give people American-sounding accents (The Verge).
OpenAI expects revenue to triple this year (Bloomberg).
Amazon is testing shopping, health assistants as it pushes deeper into generative AI (CNBC).

Invest in Sembly AI, The Future of AI Workplace Tools. 60+ investors have already contributed over $370,000. 1000+ B2B clients, ProductHunt #1 of the day. Join us & help shape the future of work! *
Poolside is chasing AGI through software and new architectures

Poolside founders Eiso Kant (left) and Jason Warner (right).
The first question I asked Jason Warner of Poolside when we sat down at HumanX was simple: how do you define AGI?
See, speaking during the conference, Warner affirmed that Poolside, a startup building AI for software engineering, is “going after AGI.” And unlike the entrenched players, Poolside’s path to AGI “is through software” rather than language.
But AGI, referring to artificial general intelligence, is a fraught term that, for starters, lacks an accepted scientific definition. As such, each developer chasing AGI has its own specific definition of what, exactly, AGI might entail; the contours of those definitions, more so than the technology itself, determine how feasible making this hypothetical real actually is.
Warner — Poolside’s CEO — told me that, generally speaking, AGI is a “bad term, but it’s also basically shorthand. The way that I think of AGI … is economically viable work that we know, and information work that happens behind the screen.”
This gets easier to define in specific domains, though it’s centered around a capacity for human replacement. And to get to that point, Warner said that a swap needs to happen; right now, software engineering is led by developers and increasingly assisted by generative AI.
Warner’s idea is to eventually switch the two, so that the GenAI model leads, while the developer “becomes what you might call the discernment engine.” This would give way to GenAI models filling junior developer, then senior developer roles.
But the models today “aren't smart enough to do this,” according to Warner.
And while the latest breed of so-called reasoning models — like DeepSeek’s R1 and OpenAI’s o-series — are a step in the right direction, Warner thinks it all has to go a step beyond reasoning and into planning, where reasoning breaks down the process and planning marks out the steps of a given solution.
These steps happen separately at the moment; a base model is trained, then “we throw (it) in these reinforcement learning environments” where it is tuned to apply Chain-of-Thought reasoning to its output. Warner said that “you gotta start doing (them) together.”
This recent rise in reasoning models hinges on a massive increase in inference-time compute; a language model is very simply taking a longer and more complex road to its output when applying CoT to ‘reason’ about it. The result, in certain applications, is more robust output alongside a bigger bill (which means higher electrical intensity and more carbon emissions from the data center).
Warner said this is a problem “that we do have to tackle,” but he said that, for many of the firms chasing AGI, “the trillion-dollar company at the other end of the rainbow is so large that you can afford to do a bunch of other things inefficiently. It’s a classic business scaling problem.”
Poolside’s answer to that problem is centered around new architectures.
“We have been experimenting with non-transformer based architecture already, and we will move to those in the future for a set of things,” Warner said. “And obviously that will change how it looks for us from that perspective.”
This push for more advanced automation highlights questions that were raised the moment ChatGPT first went online: what might this all mean for jobs? And specifically in the software engineering space, will engineers just go away?
Warner doesn’t think so. But he does think the world will change dramatically: “I think two things become way more important right away, and that is inquisitiveness and agency,” he said. “If you're someone who wants to be told what to do, it's gonna be a tough world, because we're not gonna push to juniors. We're not going to hand-hold people. That's not going to be the world that exists.”
He said the road to advancement, from junior developer to senior developer, might actually be easier in the future for those high-agency people, since they get to work alongside models trained to rapidly output code. But the model of human apprenticeship and mentoring is one that, he thinks, is going away.
“There's not going to be a world where the humans don't themselves have impact, but they're going to have to live side by side with these things,” he said. “But the people who have that impact are going to be those highly curious, highly inquisitive, critical thinkers.”
A world with AGI will be a tough world for apathetic people, Warner said.
Still, the road to AGI — even through new architectures — remains murky. The propensity of models to confidently output false information has survived every major advancement of the past few years, raising major questions about their reliability in critical environments. The problem extends to AI coding tools, with circulating reports highlighting the brittleness of AI-produced code and the time-consuming task of reviewing AI-generated code for bugs and other errors.
It’s unclear that an approach exists or will exist that would eliminate this breed of unpredictable errors, or, perhaps more realistically, enable a system to know when it screws up.
But Warner isn’t interested in timelines to AGI.
He’s interested in high-stakes results.
“We must get to the point where we're dealing with high consequence software, like the rocket systems, the stuff that underwrites the plumbing utilities, the banks. That's what we focus on,” Warner said. “When I have done that for all of these companies, I’ll know I'm in the right direction.”
Poolside last year raised $500 million in funding.
“Focus on the hardest problems in the domain, you can solve all the easy ones,” Warner added. “You focus on the easy ones, you're not sure you can round up.”


Which image is real?



🤔 Your thought process:
Selected Image 2 (Left):
“Whatever is happening on the lathe in Image 1 looks like a bunch of nonsense.”
Selected Image 2 (Left):
“Well, for one thing, AI sucks at rendering text in images, and in Image 2, Hitachi is correctly spelled on the tool bag in the background. LOL”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Waymo in DC:
A third of you are excited for it, and a third would rather Waymo stick to the West Coast.
A quarter of you don’t think Waymo will get regulatory approval to operate in D.C.
Will you get rid of your Amazon Alexa because of data privacy concerns?
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.