⚙️ IBM teases a future of artificial reasoning

Good morning. Coming to you live from Maine this week, where I promptly got snowed in while visiting my sister.
Any Mainers here?
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌊 AI for Good: Ocean planning
🏛️ Senate passes legislation addressing deepfake porn
👁️🗨️ IBM teases a future of artificial reasoning
AI for Good: Ocean planning

Source: Unsplash
Though commercial whaling has largely stalled out, whale populations around the world remain threatened by a number of human activities, most prominently collisions with ships. The crisis has some scientists calling for better marine management.
What happened: Researchers at Rutgers have developed a tool — powered, of course, by machine learning — that is designed to help predict endangered whale habitats, allowing ships to avoid lethal collisions.
The model combined two datasets holding more than three decades’ worth of information: one, a collection of satellite imagery, and two, a collection of data gathered by underwater gliders, which are autonomous, data-gathering vessels.
The researchers’ original goal was to develop a system to support offshore wind developers, but they said that the end result can “inform conservation strategies and responsible ocean development.”
Why it matters: “With this program, we’re correlating the position of a whale in the ocean with environmental conditions,” Josh Kohut, a Rutgers professor of marine sciences, said. “This allows us to become much more informed on decision-making about where the whales might be. We can predict the time and location that represents a higher probability for whales to be around. This will enable us to implement different mitigation strategies to protect them.”
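To make the idea a bit more concrete, here’s a minimal sketch of a habitat-probability model in the same spirit. It is not the Rutgers team’s actual pipeline; the features and data below are hypothetical placeholders for the satellite and glider measurements described above.
```python
# Illustrative sketch only: a habitat-probability model in the spirit of the
# approach described above. Features and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per (time, location) cell: e.g. sea-surface temperature,
# chlorophyll concentration, depth, salinity.
X = rng.normal(size=(5000, 4))
y = rng.integers(0, 2, size=5000)  # 1 = whale observed, 0 = not observed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The output is a probability surface that route planners could use to
# steer ships away from likely whale habitat.
whale_probability = model.predict_proba(X_test)[:, 1]
```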

Do real work by talking
Try an AI-native workspace that helps you stay on top of everything—without the busywork.
How does it work?
Use Supertags to instantly transform notes and voice memos into useful things—like tasks, projects, clients, bugs, articles, ideas and more.
Get the information you need, where you need it, without having to search with customizable, contextual feeds.
Build powerful custom workflows with commands, events, different view options, and AI prompts.
Senate passes legislation addressing deepfake porn

Source: Unsplash
The news: The Senate has unanimously passed the Take It Down Act, a bill introduced by Senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) to protect people from deepfake harassment and revenge porn.
The Act would criminalize the publication of nonconsensual “intimate imagery,” something that explicitly includes “computer-generated” images and videos. It would also clarify that a person consenting to the creation of an image does not qualify as consent for the publication of said image.
It would additionally require websites to remove such content within 48 hours.
The bill was similarly unanimously passed by the Senate during the previous 118th Congress, but never made it through the House. U.S. representatives Maria Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.) have already reintroduced companion legislation to the House.
The bill’s authors noted the primary impetus behind the legislation: though dozens of states have enacted laws prohibiting the publication of nonconsensual, explicit images, some even addressing deepfakes by name, those laws are wildly uneven, leaving victims exposed.
Cruz, calling for the House to pass the bill, said it would give “victims of revenge and deepfake pornography — many of whom are young girls — the ability to fight back.”
The landscape: This comes several years into a nonconsensual deepfake crisis, one that has predominantly, and very publicly, targeted women and young girls, from major celebrities to middle school students.



ChatGPT: Will you be my Valentine? More users are falling for AI companions (Semafor).
Syria just hosted its first international tech conference in 50 years (Rest of World).
Meta plans major investments into AI-powered humanoid robots (Bloomberg).
The murky ad-tech world powering surveillance of US military personnel (Wired).
Gen Z teens tell us why they stopped trusting experts in favor of influencers on TikTok (Fortune).
IBM teases a future of artificial reasoning

Source: IBM
The latest craze in artificial intelligence has to do with ‘reasoning’ models, Large Language Models (LLMs) that have been tweaked to ‘think’ longer before answering queries. The basis of this — on display across OpenAI’s o-series and DeepSeek’s R1 — involves Chain-of-Thought (CoT) reasoning, an older approach that, when combined with the latest LLMs, has had quite a powerful effect.
First highlighted in a 2022 paper from Google researchers, CoT began as a prompting technique that has more recently evolved into an approach that’s being built into the models themselves.
“Basically, somebody figured out that if you said, ‘tell a model (to) think step by step,’ it actually produces better results,” Dr. David Cox, VP of AI models at IBM Research, told me.
“The model will actually take its time. It'll verbalize a few steps, and you'll get a better result in the end. And that's a very versatile thing to do. But if you just do that, then it has its limits,” he said. “It helps. But it's not life-changing.”
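The original trick really is just a prompt change. Here’s a generic illustration; ask() is a hypothetical stand-in for whichever chat-completion API you use, not any particular vendor’s SDK.
```python
# Generic illustration of chain-of-thought prompting. `ask()` is a
# hypothetical placeholder for any chat-completion call.
def ask(prompt: str) -> str:
    ...  # send `prompt` to your model of choice and return its reply

question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Plain prompt: the model tends to jump straight to an answer.
direct = ask(question)

# Chain-of-thought prompt: the extra instruction nudges the model to
# verbalize intermediate steps before the final answer, which often
# helps on multi-step problems.
cot = ask(question + "\nLet's think step by step, then give the final answer.")
```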
And while the industry has been trending for months now in the ‘reasoning’ direction, there was a definite shift in the wake of DeepSeek’s release of R1, a seemingly cheaper model that achieved parity with OpenAI’s models through reinforcement learning and CoT reasoning.
“Everyone had a really, really strong reaction to R1 coming out, which frankly confused us in the research field a little bit,” Cox said, explaining that DeepSeek, at least to those in the industry, didn’t exactly come out of nowhere. “We were already excited. We were already all working on it.”
And rather than waiting to release it, IBM decided to “just get something out there to show what we’ve been doing in the space.”
Granite can reason. Earlier this month, IBM published a preview release of a reasoning-enabled version of its Granite 3.1 8B model, part of IBM’s family of smaller language models designed to be paired with enterprise-specific datasets.
Where DeepSeek leveraged model distillation to achieve its results, IBM applied reinforcement learning directly to its Granite model to induce CoT reasoning, which ensures “that critical characteristics like the original model’s safety and general performance are preserved.”
As a result of this approach, IBM reported double-digit gains in benchmark performance that, notably, held up across a wide range of specific tasks without sacrificing general performance.
The researchers noted no difference in safety performance between the reasoning-enabled and original models.
It’s a significant moment in the ongoing debate between large and small language models, in which smaller models offer greater efficiency but, generally, less robust performance.
“I think that's going to be a continuing trend that we can actually take these smaller models, which are very versatile, very fast, very efficient, and then virtually make them bigger on demand,” Cox said. “The idea that you could take a small model and have it do more things by having it spread out in time, that's something that I think is going to take hold across the board.”
And unlike the trend that we’re currently seeing of systems — like ChatGPT — that can switch between reasoning and non-reasoning models as needed, IBM designed this model so that users can essentially turn the CoT on or off — without changing models. Since CoT reasoning is both longer and more expensive than the alternative, it isn’t always necessary (or desirable). Because of this, IBM’s focus was on flexibility.
“We're building out this set of controllable, developer-friendly ways to add flags that just tell the model what we need it to do,” Cox said.
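Here’s a hedged sketch of what that reasoning-on-demand pattern could look like from the developer’s side. The model ID and the thinking keyword are assumptions for illustration, not IBM’s documented Granite interface; check the model card for the actual flag.
```python
# Hypothetical sketch of a single model with a reasoning toggle. The model ID
# and the `thinking` template argument are assumptions, not IBM's documented
# interface for the Granite reasoning preview.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.1-8b-instruct"  # placeholder model ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def generate(question: str, reasoning: bool) -> str:
    messages = [{"role": "user", "content": question}]
    # One switch decides whether the model emits a chain of thought before
    # answering; no second model is swapped in.
    prompt = tok.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        thinking=reasoning,  # assumed keyword; consult the model card
    )
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=512)
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Cheap, fast answers for routine queries; longer CoT only when it pays off.
quick = generate("Summarize this support ticket in one line.", reasoning=False)
careful = generate("Which refund policy applies here, and why?", reasoning=True)
```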
This work, according to Cox, is just the start of a long-term trend.
“We have a lot more going on in the reasoning space, all kinds of different kinds of reasoning work going on that you'll see in the coming months,” he said.
“I don't think in the long run, we're going to be in a world where we have just one giant model that tries to do everything,” Cox added. “We're going to have this cool set of small models that can extend and think … that's the world that we think we're heading toward. Put the developer in control, give them a toolset that can … accomplish different tasks and automate things and use this technology in ways that still keeps the developer and humans very much in control.”


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“Houses are built into hills/mountains in image 2 - very strange.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Anthropic’s roadmap:
34% of you think it’s too hard to tell at this point, but 20% think DeepSeek took them down and 24% think it’s laughable.
10% of you think Anthropic’s 2027 revenue goals make sense.
haha, no way:
“If we are hitting a limit on the growth of the technology, these companies can't turn into profitable companies that quickly. If we have another surge in innovation that could boost profits rapidly, companies will dump billions more to chase that pursuit and use it to justify losing money. Everything is still too new to think in a couple of years they'll stop bleeding as much cash.”
What do you think about the Take It Down Act?
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.