
⚙️ Report: AI for healthcare prediction needs to slow down

Good morning. On this lovely Friday, we have a whole lot of OpenAI-related news to talk about. First, the company is in talks to raise funding at a $150 billion valuation.

And second, OpenAI released Strawberry (now known as OpenAI o1). So, yeah, GPT-5 remains nowhere in sight.

Let us know if you’ve begun playing with the new model.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: A digital twin of Earth

Source: DestinE

This summer, scientists launched an early version of a “digital twin” of Earth, an AI-powered simulation of the planet designed to provide scientists with accurate and reliable data and predictions regarding weather patterns and climate change. 

The details: The model, a result of Europe’s Destination Earth (DestinE) project, began development in 2023. Officials have said that it will undergo regular enhancements until 2030, when they expect to have a “full” digital twin of the planet. 

  • The current version of the project consists of two models, one for climate adaptation and one that tracks extreme weather patterns. 

  • The system is powered by a supercomputer that, according to DestinE, is among Europe’s most powerful (and greenest) supercomputers. Still, the climate footprint of running a supercomputer to power a model intended to aid climate adaptation cannot be ignored, and remains unclear. 

Why it matters: The appeal of a digital twin of Earth, an idea that has been explored for years, lies in enhanced modeling capabilities that account for a wider scope of impacts. In this instance, DestinE has said that the project will enable Europe to better respond to natural disasters and climate change, while also gaining a clearer picture of the widespread impact of certain mitigation measures before they’re enacted. 

Imagine a future where your business runs like a well-oiled machine, effortlessly growing and thriving while you focus on what truly matters.

This isn't a dream — it's the power of AI, and it's within your reach.

Join our AI Business Growth & Strategy Masterclass and discover how to revolutionise your approach to business on 12th September at 10 AM EST.
In just 4 hours, you’ll gain the tools, insights, and strategies to not just survive, but dominate your market.

What You’ll Experience: 
🌟 Discover AI techniques that give you a competitive edge
💡 Learn how to pivot your business model for unstoppable growth
💼 Develop AI-driven strategies that turn challenges into opportunities
Free up your time and energy by automating the mundane, focusing on what you love

This is more than just a workshop—it's a turning point.

The first 100 to register get in for FREE. Don’t miss the chance to change your business trajectory forever.

OpenAI is eyeing a $150 billion valuation 

Source: Created with AI by The Deep View

In July, we talked about OpenAI’s money problems, specifically the fact that it’s on track to burn roughly $5 billion this year. At the time, The Information predicted that OpenAI would need to raise cash within the next 12 months if it wanted to remain a going concern. 

That cash raise now looks imminent. 

What happened: Bloomberg and The Information reported that OpenAI is in talks to raise $6.5 billion in new funding at a $150 billion valuation. 

  • Bloomberg reported that OpenAI is also in talks to raise an additional $5 billion in debt from banks in the form of a revolving credit facility. OpenAI didn’t respond to a request for comment. 

  • This $150 billion valuation — which would put OpenAI on par with SpaceX as one of the most valuable privately-held companies in the country — is roughly 44x its last reported annualized revenue of $3.4 billion. 

The Information reported earlier this week that new OpenAI investors don’t get equity; they instead get a promise of a portion of OpenAI’s profits. Despite its enormous valuation, OpenAI has no profits to speak of. 

For comparison, AT&T brought in nearly $170 billion in revenue in 2022. The company is valued at a little over $150 billion. 
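To put that multiple in perspective, here’s a quick back-of-envelope check using the figures above (a purely illustrative sketch; Python is used only as a calculator, and the numbers are the ones reported in this newsletter):

    # Rough valuation-to-revenue multiples, using the figures reported above.
    # All values are in billions of US dollars.
    openai_valuation = 150            # reported target valuation
    openai_annualized_revenue = 3.4   # last reported annualized revenue

    att_valuation = 150               # "a little over $150 billion"
    att_revenue = 170                 # "nearly $170 billion" in 2022 revenue

    print(round(openai_valuation / openai_annualized_revenue))   # ~44, i.e. ~44x revenue
    print(round(att_valuation / att_revenue, 1))                 # ~0.9, i.e. under 1x revenue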

  • Nvidia, OpenAI, Anthropic and Google execs meet with White House to talk AI energy and data centers (CNBC).

  • Australia threatens fines for social media giants enabling misinformation (Reuters).

  • Why NASA is sticking with Boeing (The Verge).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Hiring the best Machine Learning Engineer just got 70% cheaper.

Today we are highlighting AI talent for you courtesy of our partner, Athyna. If you are looking for the best bespoke tech talent, these stars are ready to work with you — today! Reach out here if we can make an introduction and get a $1,000 discount on us.

OpenAI launches Strawberry

Research team behind o1 (OpenAI).

Strawberry has arrived. 

OpenAI on Thursday launched OpenAI o1, a new large language model trained, in OpenAI’s words, to “think” before speaking. The company launched two preview versions of the model, o1-preview and o1-mini, yesterday for immediate use in ChatGPT. 

  • OpenAI said the model significantly outperforms GPT-4o on a series of “reasoning” benchmarks. 

  • The training data and environmental impact of the model remain obscured. 

OpenAI said that the time o1 takes to “think” makes the model more robust in terms of guardrails and other safety rules. The firm said that these same enhancements also open up new risks; the models, according to o1’s System Card, pose a “medium risk” of persuasion, a level OpenAI apparently deems acceptable for release. 

The context: As Hugging Face CEO Clem Delangue pointed out, despite all of OpenAI’s repeated marketing efforts to the contrary, the model is not thinking; “it's ‘processing,’ ‘running predictions’ ... just like Google or computers do.”

“Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it's more clever than it is,” he said. 

It has to be noted that this release, and its anthropomorphic, over-hyped language, comes conspicuously while OpenAI is in the middle of talks to raise billions of dollars in much-needed funding. It must also be noted that model releases announced by blog post are not peer-reviewed, and so lack any kind of scientific rigor. 

I will be exploring o1 in more depth over the coming days and weeks.

Report: AI in healthcare needs to slow down

Source: Unsplash

In the midst of medical staffing shortages — and faced with the potential for cures to diseases we’ve been battling for decades — the integration of artificial intelligence into healthcare is perhaps one of the most highly-anticipated applications of the technology. 

One of these applications is AI-powered genomic health prediction (AIGHP), which uses AI models to predict an individual's future health and drug responses from genomic data. 

While not yet widely used, AIGHP has generated plenty of excitement, with the U.K. funding projects and pushing strategies that center on using AIGHP to enhance the country’s approach to preventive medicine. 

A new report from the Nuffield Council on Bioethics (NCOB) and the Ada Lovelace Institute, however, warns that the technology isn’t ready yet. The report recommends that the government “rule out” the widespread deployment of AIGHP until certain conditions are met. 

The benefits: The report, the culmination of nearly two years of research, found that AIGHP could “bring significant benefits” to healthcare, but only if it’s properly integrated. 

  • At its best, the technology could warn individuals in advance about their likelihood of developing particular diseases, enabling lifestyle changes and other preventive actions. 

  • For medical systems, it could enable prioritization, reduce waste and enhance resource allocation, all while providing better results. 

Early intervention, the report says, is better than a cure. And prevention is better than early intervention. 

The risks: The problem, according to the report, is that the science around AIGHP — regarding levels of accuracy and utility — remains uncertain. 

  • Research has found that the tech doesn’t perform evenly for all individuals and disease types; further, datasets largely include European genetic information, meaning the systems can “perform badly” for people of non-European descent. 

  • “There is no consensus on whether these difficulties can be overcome, or in what time frame,” according to the report. 

In addition, there are enormous risks and ethical concerns around data privacy, combined with a heightened potential for targeted discrimination. Genetic information is, after all, highly sensitive, and could end up harming people rather than helping them. 

“A common worry … is that people deemed more likely to fall ill because of their DNA might be offered worse terms for health insurance,” the report found. 

There is also a risk of misplaced overreliance on the technology, which could lead to reduced funding for more traditional, though no less valuable, forms of medicine. 

What we should do: To realize these benefits, the rollout needs to be cautious. The report recommends that the government not deploy the technology until certain conditions are met. 

  • The researchers said that minimum standards of accuracy and reliability for AIGHP systems need to be defined, and a method of testing model performance against those standards needs to be established. 

  • The report also suggests that the government needs to update and strengthen its laws and regulations regarding surveillance, data privacy and genomic discrimination, with added legislation specifically preventing genomic discrimination by insurers. 

“AI-powered genomic health prediction has the potential to offer us a lot, but … we are not ready to fully embrace it, and nor is it ready to deliver on its promises,” Professor Sarah Cunningham-Burley, chair of the Nuffield Council on Bioethics, said in a statement. “Only by embedding ethical considerations from the outset, will AIGHP reach its full potential.”

I think part of the urgent, accelerationist mindset we see in AI comes from the sense that we’re right on the edge of the kind of innovations that could legitimately improve and save lives.

The reality is that these innovations are somewhere on the horizon; they’re not yet within reach. 

We have a lot of science, engineering and regulating to do before they’re here. 

If we rush into things without considering that gap, the tech will cause harm. We need to be slower, more methodical and far more scientific. 

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “Field of focus in Image 1 was realistic.”

Selected Image 2 (Right):

  • “Ha! You fooled me with lousy color and focus. Darn, AI can be bad, too!”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on how long it’ll take for safe self-driving cars to be a thing:

A third of you think it’ll take 5-10 years; 17% each said less than five years and 10-20 years; and 20% said 50 years (and possibly never).

Less than 5:

  • “I think Tesla will release FSD within a year - maybe two at the most. Add 3 or 4 more years for it to become the norm - at least for Tesla owners. Now I am regretting my choice, because it will take a lot longer to become the "norm" in general. Say 10-20 years.”

5-10:

  • “I think the biggest obstacles are poor driving conditions that affect vehicle sensors and laws governing autonomous driving.”

What do you think is up next for OpenAI?
