⚙️ Amazon’s new model release highlights the fuzzy applicability of LLMs

Good morning. There are always so, so many things going on in this field.
But, perhaps more importantly, Mumford & Sons dropped a new album after a seven-year hiatus (and it’s good!). That, my friends, is big news.
Also, OpenAI officially closed a $40 billion funding round at a $300 billion valuation, the largest private funding round ever. That’s nearly double the valuation of the Walt Disney Company, which brought in more than $90 billion in revenue in 2024 on earnings of $2.72 per share. OpenAI, meanwhile, doesn’t see a path to profitability until the end of the decade.
More on that tomorrow.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
⚕️ AI for Good: Drug repurposing
🩺 Apple’s got an AI health ‘coach’ on the way
🔬 Google’s Isomorphic Labs raises $600 million in first external funding round
👁️🗨️ Amazon’s new model release highlights the fuzzy applicability of LLMs
AI for Good: Drug repurposing

Source: Unsplash
We’ve talked before about the clinical plight of rare diseases, that collection of at least 7,000 conditions that get little attention because each individually impacts fewer than 200,000 people in the U.S. But, in aggregate, rare diseases impact a minimum of 30 million Americans, and hundreds of millions of people around the world.
The combination of intense impact and lack of focus makes the study of these diseases ripe for the application of machine learning tools and technologies. And the opportunity goes beyond simple diagnostics into clinical drug exploration.
What happened: Specifically, I’m talking about drug repurposing, the discovery of new uses for existing (and already FDA-approved) medicines. Last year, researchers at Harvard Medical School launched an AI model called TxGNN that’s specifically designed to identify candidates for rare diseases among existing drugs.
Trained on a vast quantity of biological and medical data — including DNA information and gene activity — the model was validated across more than a million patient records to identify subtle commonalities between rare illnesses.
It is made up of two components: one that identifies potential candidates and their side effects, and another that provides a rationale for each identification, injecting a needed boost of transparency and explainability into the related medical decision-making.
In a test, TxGNN identified potential candidates for more than 17,000 diseases from a pool of 8,000 existing medicines.
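TxGNN itself is a graph neural network trained over a vast biomedical knowledge graph, but the two-component design is easy to picture. Below is a deliberately toy Python sketch (not the actual model) that ranks drugs against a disease by shared gene associations, then names those genes as its rationale; every drug, gene and disease in it is an invented placeholder.

```python
# Toy illustration of TxGNN's two-component design: a predictor that ranks
# repurposing candidates, plus an explainer that justifies each ranking.
# All associations below are invented placeholders, not real biology.
DRUG_GENES = {
    "drug_A": {"TP53", "EGFR", "BRCA1"},
    "drug_B": {"EGFR", "KRAS"},
    "drug_C": {"APOE", "SNCA"},
}
DISEASE_GENES = {"rare_disease_X": {"EGFR", "KRAS", "TP53"}}

def score(drug: str, disease: str) -> float:
    """Component 1: rank candidates by gene overlap (Jaccard similarity)."""
    d, s = DRUG_GENES[drug], DISEASE_GENES[disease]
    return len(d & s) / len(d | s)

def rationale(drug: str, disease: str) -> str:
    """Component 2: explain a prediction by naming the shared genes."""
    shared = sorted(DRUG_GENES[drug] & DISEASE_GENES[disease])
    if not shared:
        return "no shared gene associations"
    return "shares " + ", ".join(shared) + f" with {disease}"

disease = "rare_disease_X"
for drug in sorted(DRUG_GENES, key=lambda d: score(d, disease), reverse=True):
    print(f"{drug}: {score(drug, disease):.2f} ({rationale(drug, disease)})")
```

The real model learns these relationships from data rather than a hand-written table, but the shape of the output is the same: a ranked candidate plus a human-readable reason.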
Why it matters: Solely identifying candidates doesn’t make them accessible; the model’s predictions then require evaluation and experimentation. Still, the model — which the researchers freely released — promises to drastically speed up the process.
“We’ve tended to rely on luck and serendipity rather than on strategy, which limits drug discovery to diseases for which drugs already exist,” Marinka Zitnik, the paper’s lead researcher, said in a statement. “Even for more common diseases with approved treatments, new drugs could offer alternatives with fewer side effects or replace drugs that are ineffective for certain patients.”
The team of researchers has already begun work with rare disease foundations to identify potential treatments.

This AI Tech Is The Secret Sauce Behind Ramp’s Insane Growth
If you aren’t familiar with Ramp, you should be – the $8 billion fintech startup is notorious for speed, having scaled to $100M in annual recurring revenue faster than any startup in history. Not too shabby.
But there’s a secret behind this rapid success, and it’s called Momentum.io.
This AI tool allowed Ramp to scale at speed by optimizing and automating every aspect of their sales execution strategy, from CRM data entry to forecasting, sales coaching, and beyond. The end result is a testament to the growing impact of automation tools – and we have a feeling Momentum is just getting started.
Apple’s got an AI health ‘coach’ on the way

An older image of Apple’s Health App. Source: Apple
Apple is working on a big revamp of its Health app, according to Bloomberg’s Mark Gurman, one that will include an AI-powered health coach. It’s set to launch with iOS 19.4, an update expected to go live sometime in the first half of 2026.
The details: According to Gurman, the effort — nicknamed Project Mulberry — has been in the works for years. The general idea is that the Health app will collect all sorts of data from across a user’s Apple ecosystem, data that will then be processed by an AI health coach to provide personalized health recommendations.
Apple is currently training the system with data from in-house doctors, and plans to bring in more doctors — experts across sleep, nutrition, mental health and cardiology — to make custom videos for the app.
The plan, reportedly, is for this to expand into food tracking and fitness tracking, where Apple would use iPhone cameras and AI systems to provide pointers on exercise technique and form.
According to Gurman, the project is currently the top priority for Apple’s Health team.
Apple did not return a request for comment on the report.
Going deeper: This rather succinctly highlights a major tension in the integration of AI in healthcare. In order for these algorithms to work as advertised, they need constant access to an enormous, ceaseless stream of data, which raises numerous questions over data privacy and data security.
For a company that has always prided itself on privacy, such an integration isn’t easy to pull off. And at this stage, it’s not clear how Apple plans to navigate this environment; will it do all computing on-device? Is that even possible for something like this? If not, how will it guarantee its Cloud is genuinely secure? How will Apple ensure it doesn’t gather data on people beyond the specific user (other folks walking around in the gym, for instance)? And what happens if governments or insurance companies want to gain access to those highly specific insights?
I will add to this that reliability remains an unsolved problem in most modern AI applications. Since we’re speaking of “agents,” the assumption is that Apple will be leveraging language models, which are known to get things wrong; and when it comes to building user trust in health recommendations, I’d say the room for error is zero but the capacity for error is high.


Data Breach: Cloud giant Oracle is reportedly in the midst of dealing with multiple security breaches, though it isn’t being especially forthcoming with details about what’s actually going on.
Open OpenAI? Sam Altman said Monday that OpenAI is planning to “release a powerful new open-weight language model with reasoning in the coming months.” This would mark the company’s first somewhat ‘open’ release since GPT-2.

Gemini hackers can deliver more potent attacks with a helping hand from … Gemini (Ars Technica).
Inside YouTube’s weird world of fake movie trailers — and how studios are secretly cashing in on the AI-fueled videos (Deadline).
Some large cloud customers are slowing down AI spending as DeepSeek drags prices down (The Information).
Hedge funds turn defensive amid tariff chaos, selling tech stocks at the fastest pace in 6 months (CNBC).
An AI image generator’s exposed database reveals what people really use it for (Wired).
Google’s Isomorphic Labs raises $600 million in first external funding round

Demis Hassabis. Source: Isomorphic Labs
Isomorphic Labs, the startup that’s using AI to aid drug discovery, announced Monday the closure of a $600 million funding round. It’s unclear at what valuation the startup raised funds.
The details: Isomorphic Labs said in a statement that the funding will broadly be used to accelerate its efforts and scale and progress its pipeline of drug candidates.
The round was led by Thrive Capital, with additional participation from Google parent Alphabet. This marks Isomorphic Labs’ first external funding round. (The company spun out of DeepMind in 2021 and collaborated with it to produce AlphaFold 3 in 2024.)
Last year, Isomorphic Labs secured partnerships with pharmaceutical giants Novartis and Eli Lilly to advance work on small molecule research. Specifics and timelines remain unknown, though Isomorphic said in its Monday statement that Novartis recently expanded the scope of their partnership.
The company is also working internally on advancing drug discovery in oncology and immunology, though its website remains devoid of any pipeline-relevant information. In other words, we don’t know which specific targets Isomorphic is studying, how far those programs have progressed or where they stand in preclinical studies and clinical trials.
Founder and CEO Demis Hassabis called the funding round a “significant step forward towards our mission of one day solving all disease with the help of AI.”
Amazon’s new model release highlights the fuzzy applicability of LLMs

Source: Amazon
Amazon on Monday launched a research preview of a new foundation model, Nova Act, a model trained to “perform actions within a web browser.” It’s quite similar to Computer Use from Anthropic, a competitor on the foundation model side of AI in which Amazon has also invested billions of dollars.
Amazon called Nova Act a “crucial step … toward building reliable agents by enabling developers to break down complex workflows into atomic commands.”
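For a sense of what those atomic commands look like in practice, here’s a minimal sketch in the style of the research-preview SDK Amazon published alongside the launch. I’m reconstructing the example from Amazon’s materials, so treat the exact class and method names as assumptions.

```python
# A minimal sketch of Nova Act's atomic-command style: one long workflow
# decomposed into short, individually checkable browser instructions.
# Class and method names follow Amazon's research-preview examples and
# may differ from the shipping SDK.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.amazon.com") as nova:
    nova.act("search for a coffee maker")       # atomic step 1
    nova.act("select the first search result")  # atomic step 2
    nova.act("add the item to the cart")        # atomic step 3
```

The pitch is that small, verifiable steps are easier to test and retry than one sprawling “buy me a coffee maker” instruction.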
The release brings Amazon into a tensely competitive race with a range of Big Tech and Big Tech-funded companies that have all been pushing into so-called agentic AI. Amazon claims Nova Act outperforms the competition on benchmarks, but the real world remains a far stricter grader — reliability has been a key problem with all language model-based tools, and it remains unclear how or if Amazon’s venture will address it adequately.
Going deeper: Amazon at the same time made its family of foundation models — Amazon Nova — broadly accessible to the general public with the launch of a chatbot interface that puts it in more direct competition with OpenAI, Anthropic and the like.
Amazon first released a research preview of Nova in December. You can read the model’s technical report — which doesn’t specify training data, energy use or carbon emissions — here. Rohit Prasad, SVP of Amazon’s AGI unit, said in a statement that the website “puts the power of Amazon’s frontier intelligence into the hands of every developer and tech enthusiast, making it easier than ever to explore the capabilities of Amazon Nova.”
All it takes to try out the model is an Amazon account. According to Nova’s terms of use, Amazon “records, processes and retains” all user interactions with its Nova chatbot, which it might use to train the models. Amazon warns here, in the fine print of the fine print, that “you should not submit any personal, confidential or sensitive information.”
In a brief test of Nova Pro, I asked the model to output 20 sentences of exactly 14 words each that contain internal rhymes (I was thinking about Lin-Manuel Miranda). Not a single sentence Nova output was 14 words long, and not a single sentence exhibited internal rhymes.
In fact, Nova instead went for end rhymes, concluding almost every sentence in either the word “gleam” or the word “stream.”
I tested the same prompt on ChatGPT, Claude 3.7 Sonnet and Grok 3; not one of them produced a single sentence in the batch that was 14 words long. The others were able to output internal rhyme, but, well … “the light of night guides my sight across the vast mountain height.”
When I followed up by specifying that each line must contain just one internal rhyme with the following line, all four chatbots failed utterly.
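If you’d like to rerun this test, the constraints are easy to check mechanically. Here’s a crude, stdlib-only Python checker; the internal-rhyme test is a rough heuristic (two distinct non-final words sharing a three-letter ending), not real phonetics.

```python
# Crude checker for the prompt's two constraints: exactly 14 words per
# sentence, plus at least one internal rhyme. The rhyme test is a rough
# suffix heuristic, not actual phonetics.
import re

def words(sentence: str) -> list[str]:
    return [w.lower() for w in re.findall(r"[A-Za-z']+", sentence)]

def word_count_ok(sentence: str, target: int = 14) -> bool:
    return len(words(sentence)) == target

def has_internal_rhyme(sentence: str) -> bool:
    # Drop the final word so end rhymes (Nova's favorite trick) don't count.
    interior = set(words(sentence)[:-1])
    endings = [w[-3:] for w in interior if len(w) >= 3]
    return len(endings) != len(set(endings))

line = "the light of night guides my sight across the vast mountain height"
print(word_count_ok(line), has_internal_rhyme(line))  # False True: 12 words, but rhyming
```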
Amazon Nova, as it reminds users in what looks like 7.5-point Arial, “may not always get it right.”

A screenshot of my chat with Nova Pro.

We’ll come back to my little evaluation metric in a minute.
The chatbot’s landing page features a rotating list of prompt suggestions that, to me, highlight the adoption problem the industry is facing.
For instance: “write a product review for a time machine that only travels 24 hours into the past.” And “compose a love letter from a book to its reader,” which sat next to: “create a Python Flask app to build a simple REST API” and “write an instruction manual for building bridges in San Francisco.”
This seems to exemplify a lack of clarity and focus among developers of general-purpose chatbots. There is a clear push here to attract the consumer; I can’t speculate on why that is, other than that it’s easier to get what you want in terms of regulation when the masses are on board.
But the money to be made here has nothing to do with the consumer; it has everything to do with enterprises, big businesses, government organizations, etc. High-stakes, high-cost applications, like building bridges. The problem with this, as NASA researchers recently pointed out, is that language models are not fit for high-stakes applications, since the systems can’t distinguish between truth and fiction (and since they don’t actually understand the tokens they process).
I will add to this that, more than two years into this GenAI race — and an internet’s worth of training data later — an inability to produce a few sentences in line with a couple of very specific constraints does not give me any confidence that these systems should be used in environments where getting the answer right actually matters.


Which image is real?



🤔 Your thought process:
Selected Image 2 (Left):
“One of the tree trunk shadows in image 1 looked drawn and filled in with a marker. On top of that, I cycle about 3,000 km in the Rocky Mountains near Banff every year and Image 2 just looked like my photos. The real deal.” Awesome.
Selected Image 1 (Right):
“This one was very difficult - the clarity in Image 2 was almost too perfect.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Musk’s X acquisition:
A third of you think the move seems sketchy; 24% aren’t surprised, another 24% don’t think it’ll actually change much for end users and 18% think it’s a solid move.
Something else:
“X is losing money. xAI is not. Best way to avoid paying taxes and saving it from potential creditors.”
Something else:
“If you imagine being a billionaire who wants to eventually (but not yet) get rid of a failing company that you own, I imagine that selling it to another company you own would be a way to get a tax write off of some kind. And by selling X to xAI, then he would technically have more legal grounds to claim xAI can continue to use the data gleaned from X.”
Would you be excited for an Apple AI health coach?
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.