⚙️ The morality of artificial warfare

Good morning. Nvidia pushed the S&P 500 into the green on Wednesday, rallying some 5% — enough to overcome the rough day that Google (down 7%) and AMD (down 6%) shared.
It helps to have a $3 trillion market cap, I guess.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌱 AI for Good: Programmable plants
💻 Google makes Gemini 2.0 available to all
👁️🗨️ Deep Research and the open-source whiplash
🚁 The morality of artificial warfare
AI for Good: Programmable plants

Source: Heritable Agriculture
Food production, like energy production, is a massive, and massively necessary, industry. And food production, like energy production, has a number of adverse impacts on the environment around it.
The problem: As of 2019, the global food production ecosystem accounted for about a quarter of total greenhouse gas emissions. And that same industry, which uses about half of the world’s habitable land, is depleting groundwater and, through damaging monocropping practices, degrading soil health.
What happened: A new company — Heritable Agriculture — just spun out of Google’s research division, X, with a machine learning platform designed to help biologists better understand — and design — sustainable crops.
According to the company, the team’s algorithms can identify the function of specific regions of a given plant’s genome.
By understanding those regions, biologists can breed plants “with climate-friendly traits for increased yields, lower water requirements and higher carbon storage capacity in roots and soil.”
The team validated the models over the course of several years’ research, in labs and in the field. Now, they’re teaming up with farmers to figure out what adjustments can be made to existing crops to optimize their growth while reducing any negative impacts.
They believe the system can also be applied to reforestation efforts, suggesting that their platform can improve the resiliency and health of trees 400 times faster than current methods.

New AI Tools Are Cool - Getting Tracked Isn’t.
AI tools are evolving fast - but so are the ways companies collect and monetize your data.
Before signing up, safeguard your personal info with Cloaked Identities and Data Removal.
Generate unlimited working email IDs & phone numbers to use instead of your real ones
Avoid spam, unwanted tracking, and potential security risks
Remove your personal information from 120+ data brokers
Stay in control of who gets access to your information
Curious to see what info about you is already exposed? Get started with a free risk scan today.
Google makes Gemini 2.0 available to all

Source: Google
The news: A month after unveiling experimental versions of its Gemini 2.0 family of generative AI models, Google on Wednesday made the systems accessible to everyone.
The details: 2.0 Flash costs $0.10 per million input tokens and $0.40 per million output tokens; Flash-Lite comes in at $0.075 and $0.30, respectively. That pricing significantly undercuts comparable offerings from both DeepSeek and OpenAI (a quick cost sketch follows at the end of this section).
2.0 Flash is now available through Google’s API, as well as to users of the Gemini app.
Google additionally made available 2.0 Flash-Lite, a new model that the company says is its most cost-efficient one to date, as well as experimental versions of 2.0 Pro and 2.0 Flash Thinking.
All the new models largely outperform the Gemini 1.5 family on a series of benchmark evaluations.
The release comes shortly after Google’s earnings report, in which the company posted a miss on Cloud revenue alongside promises to spend $75 billion on its AI buildout in 2025. The stock fell as much as 8% on Wednesday.
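For a rough sense of what those per-token rates mean in practice, here’s a minimal back-of-the-envelope sketch in Python. The prices are the per-million rates quoted above; the monthly token volumes are hypothetical, chosen purely for illustration.

```python
# Back-of-the-envelope cost estimate using the Gemini 2.0 per-million-token rates above.
# The traffic numbers below are hypothetical, purely for illustration.

PRICES = {  # dollars per 1M tokens
    "gemini-2.0-flash":      {"input": 0.10,  "output": 0.40},
    "gemini-2.0-flash-lite": {"input": 0.075, "output": 0.30},
}

def estimated_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost for a given token volume."""
    rates = PRICES[model]
    return (input_tokens / 1e6) * rates["input"] + (output_tokens / 1e6) * rates["output"]

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
for name in PRICES:
    print(f"{name}: ${estimated_cost(name, 50_000_000, 10_000_000):.2f}/month")
# gemini-2.0-flash: $9.00/month
# gemini-2.0-flash-lite: $6.75/month
```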

Want a list of companies that are actually a good fit? The PLAY is LeadTalk. It’s KEY that you don’t overpay for list building. LeadTalk uses AI to analyze your best customers and instantly gives you a list of target accounts based on your ICP.
No complicated setup
No credit card required
Just smarter, faster account targeting


Amazon is returning to the mathematical basis of symbolic AI as a means of resolving hallucinations, through something it is calling “automated reasoning.”
Google’s head of quantum AI predicted that commercial quantum applications will be here in as little as five years. But, as with AGI predictions, it’s kind of hard to tell.

A reviews site embroiled in AI scandal is back from the dead (The Verge).
Chegg bets big on the AI that nearly broke it (Semafor).
Is Google Maps fatally misleading drivers in India? It’s complicated (Rest of World).
AMD shares drop 7% on disappointing data center revenue (CNBC).
Elon Musk’s secretive government IT takeover, explained (Vox).
Deep Research and the open-source whiplash

Source: Unsplash
The news: 24 hours after OpenAI released its new Deep Research ‘agent,’ researchers at Hugging Face partially replicated the system. Only theirs is open.
The details: The team explained in a blog post that the functionality of these agents is relatively straightforward. They are built around a powerful large language model and tied together with an agentic framework, something that enables the LLM to use tools and browse the web.
So the team set out to build an agentic framework around an existing frontier LLM, equipping it with a simple text-based web browser and a text inspector.
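Mechanically, that architecture amounts to a simple loop: the model either requests a tool call or returns a final answer, and the framework runs the tool and feeds the result back into the conversation. Here’s a minimal, hypothetical sketch of that pattern in Python; `call_llm` and the two tools are stand-in placeholders, not Hugging Face’s actual implementation.

```python
# Minimal sketch of an agentic loop: an LLM plus tools for browsing and reading.
# `call_llm` is a stand-in for any chat-completion API; the tools are toy versions
# of the text-based web browser and text inspector described above.
import json

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real LLM call; expected to return either
    {"tool": name, "args": {...}} or {"answer": "..."}."""
    raise NotImplementedError("wire up your model of choice here")

def search_web(query: str) -> str:
    """Toy text-based browser: return search results as plain text."""
    raise NotImplementedError

def inspect_text(url: str) -> str:
    """Toy text inspector: fetch a page and return its readable text."""
    raise NotImplementedError

TOOLS = {"search_web": search_web, "inspect_text": inspect_text}

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:              # the model is done researching
            return decision["answer"]
        tool = TOOLS[decision["tool"]]        # otherwise, run the requested tool
        observation = tool(**decision["args"])
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Step budget exhausted without a final answer."
```

Hugging Face’s actual implementation is more sophisticated, but the control flow is essentially this.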
In just those first 24 hours, the team’s open version of Deep Research achieved a score of 55.15% on the public validation set of the GAIA agentic benchmark, beating the open-source state-of-the-art by a healthy margin.
OpenAI’s Deep Research achieved a score of 67%.
The researchers said that Deep Research is probably boosted by the “excellent” web browser OpenAI introduced with Operator; now, they’re going to try to tackle that side of the equation in an attempt to boost open AI to parity with OpenAI’s closed AI.
The landscape: All eyes have been on open-source ever since the market (and general public) realized that DeepSeek’s R1 — with openly available model weights — was able to achieve comparable performance with OpenAI’s closed models.
Sam Altman said during a recent Reddit AMA that OpenAI is discussing a more open approach: “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy; not everyone at OpenAI shares this view, and it's also not our current highest priority.”
The morality of artificial warfare

Source: Unsplash
The ethics of artificial intelligence are broad, to say the least.
But one element of the ethics of AI involves its deployment in warfare, something that has been going on since long before this more recent wave of generative AI systems. The same algorithms that have come to invisibly run so many aspects of our digital lives have been studied and deployed by militaries around the world.
But this latest round of AI represents a new level of promise. Promise, and risk.
What happened: Google — which was known around 25 years ago for its since-removed motto, “don’t be evil” — removed a pledge from its AI principles page that it would not pursue applications of AI in weaponry.
When Bloomberg noted the change, Google pointed the outlet to a new blog post, in which the company said that “democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights. And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”
Google’s AI principles aren’t gone; they just no longer prevent the company from designing AI-enabled weaponry. The words “responsible AI” are still plastered all over Google’s principles page, alongside promises to deploy models only when the overall benefits are likely to “substantially outweigh the foreseeable risks.” And I suppose that is all a matter of perspective.
Margaret Mitchell, the chief ethics scientist for Hugging Face, told Bloomberg that “it means Google will probably now work on deploying technology directly that can kill people.” Mitchell formerly led Google’s ethical AI team.
In use: The wars in Ukraine and Gaza have already seen the deployment of AI tools. Some, like those used by the IDF, were deployed to select bombing targets; in Ukraine, soldiers have used AI to automate attack drones.
And through it all, reporting has revealed the extent of both Google’s and Microsoft’s cloud relationships with the Israeli military, which included the sale of AI technology.
This pivot toward war isn’t exclusive to Google; OpenAI recently partnered with defense contractor Anduril to develop AI for “national security missions,” a partnership that followed an adjustment to OpenAI’s own policies, which had previously prohibited the use of its tech by militaries.
Then there’s Palantir, an AI firm that also offers “defense solutions,” whose stock hit an all-time high on Tuesday.
The ethics of war: The ethical issues at hand here are numerous. Generative AI systems are known to make errors, including those born from algorithmic bias and hallucination. That reality in the context of war is concerning.
But according to Dr. Elke Schwarz, a professor of political theory at Queen Mary University of London, the issue is simple: “We don't want to get to a point where AI is used to make a decision to take a life when no human can be held responsible for that decision.”
“The history of warfare has shown us that the farther a soldier is away from their target, the easier it is to make their kill decision,” she said. “Now we’re at a point where a human is significantly removed from the kill decision. That is why it is so important that we have regulation at the international level.”
The problem is that, as Professor Mariarosaria Taddeo of the Oxford Internet Institute has said, warfare is a “competitive game.” If one side has deployed AI, the other side might well need it to keep pace: “AI is now considered a key capability.”
Taddeo said that the key question here is one of moral responsibility and control; in other words, ensuring that there’s a way for a human to quickly identify errors and shut a system down.

AI has gone to war. That will not be undone.
We must be wary of predictive action, surveillance and fully automated killing. Hopefully, legally binding international frameworks addressing the conundrum of just warfare can be established, though such a move feels unlikely at this stage.


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“The other image background has The Wave From Nowhere.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on SB 243:
A third of you don’t think the bill will go anywhere. 20% think it’s an easy one to pass.
The rest don’t know.
Nope, Big Tech will fight it:
“With these tech oligarchs hanging around Trump they will have the leverage to dictate what they want when they want it…. and what hits the dumpster. Feels like we’re headed towards the world of Blade Runner…..”
What do you think about Google's policy change?
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.