⚙️ The sustainability problem of Jevons’ Paradox

Good morning, and happy Friday.
In case you missed it, check out the latest episode of the podcast, a conversation on the ethics of AI.
Hope you all have a great weekend!
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌊 AI for Good: Shark Eye
💰 Anthropic’s aggressive roadmap to profitability
🏛️ Elon Musk throws a wrench into Altman’s for-profit conversion plans
🌎 The sustainability problem of Jevons’ Paradox
AI for Good: Shark Eye

Source: SharkEye
Sharks — with a history predating that of the dinosaurs (by about 200 million years) — have survived five mass extinction events, spending the past 450 million years at the top of the oceanic food chain.
But over the past 50 years, their population has declined severely, with some 30% of shark species threatened with extinction, according to the Save Our Seas Foundation. And as we work to better conserve and protect our oceans, the loss of sharks is profound; the predators are key components of marine ecosystems, influencing the behavior of their prey, which, in turn, influences and protects the health of the natural carbon sinks that the ocean is home to.
What happened: SharkEye, a collaborative research effort led by the Benioff Ocean Science Laboratory, has been working to combine artificial intelligence with drone footage to better track sharks in California.
The project conducts regular drone flights over the water to gather video footage that is later processed by a machine learning algorithm trained to detect great white sharks.
The researchers then share these shark sightings with local communities and public safety officials.
Why it matters: Sharing this shark-sighting data has a dual effect: it warns people away from potentially shark-infested waters, while simultaneously providing marine biologists with the kind of behavioral information that can inform and enhance conservation and protection efforts.

The ROI Playbook for LLM Success
Turing’s white paper reveals actionable insights on how companies can maximize the ROI of their LLM investments by addressing key challenges in cost, efficiency, and impact.
With this guide, you’ll learn how to:
Identify cost-saving opportunities in LLM deployment
Improve model performance and align with business goals
Unlock measurable value from your AI investments
Anthropic’s aggressive roadmap to profitability

Source: Anthropic
The news: Anthropic, which lost $5.6 billion in 2024, expects to hold its losses for 2025 to about $3 billion on revenue of $3.7 billion, The Information reported.
The details: According to sources who reviewed internal documents, Anthropic additionally expects to soon see a dramatic increase in revenue, projecting $34.5 billion in revenue in 2027.
But that number is an “optimistic” projection; Anthropic’s base-case projections are more modest: $2 billion in revenue in 2025, $5 billion in 2026 and $12 billion in 2027.
These projections come as Anthropic looks to raise another $2 billion in funding at a $58 billion pre-money valuation. The startup expects to stop burning cash by 2027.
Anthropic didn’t respond to a request for comment regarding the report.
How it stacks up: OpenAI, meanwhile, lost about $5 billion on $3.7 billion in revenue in 2024. And though the company has rosy revenue projections through the end of the decade — $12 billion in ‘25, $26 billion in ‘26, $44 billion in ‘27, $70 billion in ‘28 and $100 billion in ‘29 — it doesn’t expect to actually turn a profit until 2029.
Until that time, OpenAI expects to burn $44 billion between 2023 and 2028.
OpenAI is right now in the process of raising $40 billion in funding at a $300 billion post-money valuation.
The landscape: While corporate spending on AI grew, according to one report, by 500% in 2024 (and Anthropic’s market share right along with it), this all comes at a moment of reckoning for the industry. The launch of DeepSeek’s R1 has sparked an open-source surge, demonstrating that cheap or free models can offer comparable capabilities — a threat to both startups, whose businesses are predicated on selling access to their models.
This also coincides with a report from The Information that Anthropic is planning to release a hybrid model that can handle both reasoning and less complex tasks.

Don’t Trust AI Alone with Your Sales
You wouldn’t trust an AI bot to attend a networking event or to close a critical deal, so stop burning all your leads with generic AI outreach.
Harness the strengths of humans + AI with Bounti, your AI teammate that does all the research and prep work for you, so you can focus on what matters: making connections and closing deals.
In minutes, Bounti gives you a toolkit with everything you need to win target accounts, so you can:
✔️ know your prospects and what they care about
✔️ land your pitch by connecting to buyer business objectives
✔️ and thoughtfully engage them with personalized outreach


Apptronik, a startup selling AI-powered humanoid robots, just closed a $350 million Series A funding round, which featured participation from Google. The money will enable a significant expansion.
OpenAI released a significantly expanded version of its “model spec,” a document that lays out guidelines for how its models should generally behave. The first version was 10 pages long; this newest version clocks in at 63.

DOGE has disregarded data protection and privacy norms. The consequences will be felt years down the line (Fast Company).
The UK’s war on encryption affects all of us (The Verge).
Teens across Asia migrate to Taiwan for promises of semiconductor jobs (Rest of World).
Chip maker EnCharge raises $100M in Series B round, aims to slash AI energy usage (Semafor).
14 news publishers have sued Cohere, alleging massive copyright infringement (News Media Alliance).

Elon Musk throws a wrench into Altman’s for-profit conversion plans

Source: Elon Musk
Elon Musk really doesn’t want OpenAI’s for-profit conversion to go through.
The news: Musk — whose team of investors recently put in a $97.4 billion (unsolicited) bid for OpenAI’s nonprofit arm — said Thursday that he’d rescind the bid, so long as OpenAI stops its pending conversion into a for-profit company and promises to “preserve the charity’s mission,” according to a court filing.
Despite the fact that Sam Altman has said, quite clearly, that OpenAI “is not for sale,” a bid of nearly $100 billion for the nonprofit arm could carry some (likely intended) consequences.
See, last year, OpenAI secured $6.6 billion in funding at a $157 billion valuation. The terms of that funding round were predicated on a promise that OpenAI would convert from its current format — a capped for-profit company controlled by the nonprofit that OpenAI started off as — into a for-profit company. According to the terms, investors can ask for their money back if that conversion doesn’t happen within the first two years following the closure of the round.
The conversion is already complicated; OpenAI has said it will involve its nonprofit arm taking a minority equity stake in the new OpenAI. The valuation of that stake is the tricky part: The Information reported last year that the nonprofit could be valued at around $40 billion. Because his offer is genuine, Musk has effectively boosted the valuation of the nonprofit arm, further complicating the equity conversion and potentially increasing share dilution, which could leave OpenAI’s investors less than pleased.
It’s the latest battle in a war that has been raging for years, really, between Musk and OpenAI. After co-founding the startup back in 2015 — and donating millions of dollars to the cause — there was a bit of a power struggle, with Musk seeking control of OpenAI. The board resisted, and Musk left in 2018, taking his billions off the table and precipitating OpenAI’s multi-billion-dollar partnership with Microsoft.
Musk sued OpenAI last year, a suit he withdrew in June and then refiled in August. The suit alleges that Altman “intentionally courted and deceived Musk,” and that “in partnership with Microsoft, Altman established an opaque web of for-profit OpenAI affiliates, engaged in rampant self-dealing.”
Deepwater’s Gene Munster doesn’t expect the bid to really change anything at OpenAI.
Altman told CNBC that Musk’s latest move is just an effort to “slow down a competitor.”
The sustainability problem of Jevons’ Paradox

Source: Unsplash
At the height of the DeepSeek shockwave in January, Microsoft CEO Satya Nadella weighed in to reassure investors that the AI bet remains intact. Tweeting out a link to a Wikipedia article on Jevons’ Paradox, Nadella wrote: “Jevons’ paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of.”
This seems to be the hyperscalers’ strongest hope as they build out their high-cost infrastructure: that efficiency gains will win them more money instead of hurting their bottom line.
But what is Jevons’ Paradox? Proposed in the 1800s by economist William Stanley Jevons, the concept was based on a simple observation: as coal use became more efficient, it was leading to an increase in coal consumption. The hope is that this same pattern will apply to AI; that as it becomes more efficient (and therefore, cheaper), it will become more widespread.
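The rebound logic can be made concrete with a toy constant-elasticity demand model (purely illustrative; the function names and parameter values are not from the article or the research it cites). When demand for a resource's useful output is elastic enough, an efficiency gain lowers the effective price so much that total resource consumption rises rather than falls:

```python
# Toy sketch of Jevons' Paradox: demand Q = k * price^(-elasticity),
# where the price per unit of useful output falls as efficiency rises.
def resource_use(efficiency, elasticity, k=1.0, cost=1.0):
    """Total raw resource consumed at a given efficiency level."""
    price = cost / efficiency              # efficiency makes output cheaper
    demand = k * price ** (-elasticity)    # cheaper output -> more demand
    return demand / efficiency             # raw resource needed to meet it

baseline = resource_use(efficiency=1.0, elasticity=1.5)   # 1.0

# Inelastic demand (elasticity < 1): doubling efficiency cuts resource use.
inelastic = resource_use(efficiency=2.0, elasticity=0.5)  # ~0.71, less than baseline

# Elastic demand (elasticity > 1): doubling efficiency RAISES resource use.
elastic = resource_use(efficiency=2.0, elasticity=1.5)    # ~1.41, more than baseline
```

The paradox, in this framing, is simply the elastic case: the efficiency gain is swamped by induced demand. Whether AI demand is elastic enough for this to hold is exactly the open question.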
If this does indeed occur, the implications, according to recent research, will be significant in a number of areas, not least of which involve the climate.
The details: The core of this challenge — both economically and environmentally — has to do with the data centers that power AI, which consume far more energy than they used to, driving a steady increase in data-center-related carbon emissions and water consumption.
So, if these efficiency gains that Nadella was referencing do, in fact, lead to higher consumption, the result could be a boost in emissions and energy consumption, rather than the opposite.
“These second-order effects challenge the presumption that purely technical optimizations alone will deliver sufficient climate benefits,” the researchers — including Dr. Sasha Luccioni, Emma Strubell and Kate Crawford — wrote.
A challenge at the core of this push and pull between the rise of AI and its climate impact, for good or ill, is a persistent lack of industry-wide transparency. The electricity and emissions costs, though suspected, aren’t known for sure; the climate benefits that AI promises, meanwhile, have yet to materialize in any measurable way. Together, the two make for a rather hazy picture.
The strides that have been taken to attempt to gain clarity around those costs have thus far dealt with direct tangibles, such as the energy and water costs associated with training and deploying generative models.
This doesn’t consider the full picture, which includes the complete supply chain associated with AI, from the rare minerals needed to build semiconductors to the electronic waste they, in time, become.
But this all goes a step further, according to the researchers. Since the industry of AI is currently driven by a financial system that “rewards rapid growth,” it will only be possible to align AI initiatives with climate goals if that same maneuver also supports business goals. “This structural constraint significantly narrows the scope of AI’s potential as a climate intervention.”
There is a risk, according to the paper, of potential climate solutions being overshadowed by more lucrative pursuits, indicating that a more systemic approach might well be necessary.
“Genuinely climate-aligned AI strategies might require public policy frameworks that penalize unsustainable practices and reward genuinely carbon-negative deployments of AI, and business models that do not hinge on perpetual growth, in order to ensure that increased AI efficiency does not simply spur more consumption.”
“The onus is on the AI industry to ensure technology does not contribute to the problem before producing any future solutions,” they added. “This requires reckoning with AI’s actual impacts, both direct and indirect, measured comprehensively and contextualized socially, economically and environmentally.”

It all comes down to transparency and incentives.
To a degree, the economics and sustainability questions are aligned under the same roof of efficiency. The problem is that efficiency only takes you so far: if Jevons’ Paradox holds, greater efficiency will solve the cost problem while worsening the climate problem.
The incentives need to shift to make addressing the climate problem a legitimately desirable move for developers in the space.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“The contrail gave this one away.”
Selected Image 2 (Right):
“The minaret-like structures are odd.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on GPT-5:
Just about half of you think OpenAI will, in fact, release GPT-5 this year. 25% of you think that maybe it’ll come in 2026. And 15% of you don’t think OpenAI will ever actually release it.
Something else:
“Since AGI lacks a universally accepted definition, there is a risk that OpenAI or other AI companies might define it in a way that aligns with their current achievements rather than a more ambitious or theoretical standard. However, the more immediate concern should be ensuring AI reliability, interpretability, and alignment with human values, as these challenges impact real-world applications today.”
Do you think Anthropic's (and OpenAI's) revenue roadmaps make sense?
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.