⚙️ Report: 81% of clinicians believe AI could ‘erode critical thinking’

Good morning. Today, Elsevier published a report on researcher and clinician perspectives on AI. The findings are unsurprisingly nuanced: researchers see plenty of opportunity for AI to be helpful, and plenty of ways for it to be harmful.

We break it all down below.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Predicting future lung cancer risk
  • Anthropic CEO says $1 billion AI models are in training
  • Watch a robot move sheet metal in a BMW plant
  • Report: 81% of clinicians believe AI could ‘erode critical thinking’

AI for Good: Predicting future lung cancer risk

Source: MIT

Researchers at MIT last year designed a deep-learning algorithm, called “Sybil,” that can accurately predict an individual’s risk of developing lung cancer in the future.

The details: Trained on data from the National Lung Screening Trial, Sybil only requires a single low-dose computed tomography (LDCT) scan and does not require clinical data or radiologist annotations. 

  • It is designed to run in the background on radiologist reading stations and was validated across three different data sets. 

  • The algorithm achieved concordance indices of 0.75, 0.8 and 0.81 across the three datasets, where anything above 0.7 is considered good and anything above 0.8 is strong; a short sketch of how this metric is computed appears below.

Sybil was able to accurately predict whether a patient would develop lung cancer in the next year with 86% to 94% accuracy. 
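
For readers curious about the metric, here is a minimal sketch of how a concordance index (C-index) is typically computed for risk models like this one. The data, names and simple pairwise method below are illustrative assumptions, not Sybil’s actual evaluation code.

```python
# Minimal C-index sketch: the fraction of comparable patient pairs that a
# model's risk scores rank correctly (1.0 = perfect, 0.5 = random chance).
# All names and data here are hypothetical, for illustration only.

def concordance_index(event_times, risk_scores, event_observed):
    concordant, comparable = 0.0, 0
    n = len(event_times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable when patient i is known to have had the
            # event (e.g., a diagnosis) earlier than patient j's time.
            if event_observed[i] and event_times[i] < event_times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0   # higher risk, earlier event: correct
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

# Toy example: four patients, perfectly ranked by the model.
times = [2, 5, 7, 10]               # years until diagnosis (or censoring)
risks = [0.9, 0.6, 0.4, 0.1]        # model-predicted risk scores
observed = [True, True, False, True]
print(concordance_index(times, risks, observed))  # 1.0
```

A score in the 0.75-to-0.81 range, like Sybil’s, means the model correctly ranks roughly three out of four comparable patient pairs.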

Why it matters: Lung cancer, the deadliest cancer in the world, is difficult to treat at more advanced stages. Detecting it early increases a patient’s five-year survival rate seven-fold, according to Florian Fintelmann, one of the researchers.

Plus, lung cancer screening programs are underdeveloped around the world, meaning that something like Sybil could help “bridge this gap,” bringing personalized health care to more people and enabling patients to detect lung cancer early so that treatment is more effective.

Restore your hairline with Hims

“Hims has really given me my confidence back.” 

“It has made me more confident, and I am proud to show other people about it.” 

“I noticed a huge change in the overall health and fullness of my hairline.”

Hims offers a range of products that can help stop hair loss and regrow hair. And the process is quick, painless and 100% online. Just fill out an intake form and a medical provider will review your information and make treatment recommendations, where appropriate.

  • Your treatment, if prescribed, will then ship free of charge and discreetly to your front door.

Restore your hairline with Hims. Get started today.

*Prescription products require an online consultation with a healthcare provider who will determine if a prescription is appropriate. Restrictions apply. See website for full details and important safety information.

*Results have not been independently verified. Individual results will vary. Customers were given free product.

Anthropic CEO says $1 billion AI models are in training

Source: Unsplash

Yesterday, we broke down some of the math behind the AI bubble; the short version is that industry-wide costs are enormous and revenue isn’t keeping up.

Anthropic CEO Dario Amodei expects those costs to continue to rise. 

The details: Speaking on the In Good Company podcast, Amodei said that models currently in training could cost around $1 billion to train.

  • "I think if we go to ten or a hundred billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue a pace, and the chip improvements continue a pace, then I think there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things,” he said. 

  • There is no evidence, however, to suggest that this push deeper into large language models will actually result in some iteration of artificial general intelligence; the evidence is clear that current models cannot reliably extrapolate beyond their training data. 

The context: The cost of model training has grown exponentially over the past few years. In 2017, the original Transformer model cost $900 to train, according to the 2024 AI Index Report. (A rough sketch of the implied growth rate follows this list.)

  • OpenAI’s GPT-4, meanwhile, cost over $100 million to train. These figures do not include the cost of daily operations or the environmental cost of rising carbon emissions.

  • These steadily increasing costs, according to the AI Index, have “effectively excluded universities, traditionally centers of AI research, from developing their own leading-edge foundation model.”
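
To make that trajectory concrete, here is a back-of-the-envelope sketch of the growth rate implied by the two figures cited above. Treating GPT-4 as a 2023 model and extrapolating the derived rate forward are assumptions made purely for illustration.

```python
# Rough sketch of the training-cost trajectory described above. The $900
# (2017 Transformer) and ~$100M (GPT-4) figures come from the AI Index
# Report; the 2023 date for GPT-4 and the forward extrapolation are
# assumptions for illustration, not reported numbers.

transformer_2017 = 900            # USD, original Transformer training run
gpt4_2023 = 100_000_000           # USD, reported lower bound for GPT-4
years = 2023 - 2017

growth = (gpt4_2023 / transformer_2017) ** (1 / years)
print(f"Implied growth: ~{growth:.1f}x per year")      # ~6.9x per year

# If that rate held (a big if), starting from a $1B model in 2024:
cost = 1_000_000_000
for year in (2025, 2026, 2027):
    cost *= growth
    print(f"{year}: ~${cost / 1e9:.0f}B")              # ~$7B, ~$48B, ~$333B
```

On that (very rough) curve, Amodei’s ten-to-hundred-billion-dollar window for 2025 to 2027 is about what naive extrapolation predicts.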

My View: A near-term future of exponentially more expensive models will likely further complicate this AI bubble, since even at today’s (still high) costs, the revenue isn’t there. I don’t know what will change revenue-wise if OpenAI drops $100 billion to train GPT-5. What is clear, though, is that today’s $600 billion AI hole will just continue to get bigger, and that isn’t sustainable.

  • Share 2 Days with the Brightest Minds in AI, including scientists and engineers from OpenAI, Meta, DeepMind, and many more.

    • This week only — Register using discount code: deepview24 for a $350 discount!*

  • Chinese self-driving cars have quietly traveled 1.8 million miles on U.S. roads, collecting detailed data with cameras and lasers (Fortune).

  • Meet the AI-powered robots that Big Tech thinks can solve a global labor shortage (CNBC).

  • Boeing’s Plea Deal Shows The Importance Of Accountability After A Crisis (Forbes).

  • The fuzzy math behind Scale AI’s valuation (The Information).

  • Why spend hours on your resume when you can do it in minutes?

    🤝 Trusted by over 5 million professionals, Kickresume uses GPT-4 and proven job-winning phrases to help you land your dream job. Make your CV stand out today.*

Watch a robot move sheet metal in a BMW plant

Source: Figure

In January, Figure AI — the AI x robotics company designing humanoid robots intended to reshape the labor market — closed a deal with BMW to deploy a team of its robots throughout some of BMW’s manufacturing facilities.

  • Figure recently released an update on its progress in a brief clip in which a Figure 01 robot can be seen picking up a piece of sheet metal and putting it in place. 

  • Figure CEO Brett Adcock said it was all fully autonomous; the robots are powered by neural networks and navigation is the result of simulated object training. 

Effortlessly Schedule Developer Interviews with PestoAI

Imagine effortlessly adding top-tier developers to your interview calendar.

PestoAI can source and interview developers from around the globe, outperforming the average recruiter. You only pay when you succeed.

Using PestoAI feels like magic. Here's what happens behind the scenes:

  • Targeted Sourcing: Our AI curates candidates based on your job description, using a model trained by local market recruiters.

  • Company Pitch: We pitch your company to the best-fit developers.

  • Technical Screening: Candidates undergo a thorough technical interview using our proprietary technology.

  • Seamless Scheduling: We schedule the interviews directly on your calendar.

For Deep View readers, we are offering an exclusive 50% OFF if you sign up today!

Report: 81% of clinicians believe AI could ‘erode critical thinking’

Source: Elsevier

A survey released today by scientific publisher and analytics company Elsevier found that almost all clinicians surveyed have heard of artificial intelligence, though only 19% have used ChatGPT for work. 

Key highlights: The vast majority of those surveyed said that they believe AI can be used to accelerate knowledge discovery, provide cost-savings for hospitals and institutions and increase the quality of their work. 

  • The vast majority of those surveyed also said that they believe AI can be used to amplify misinformation, make clinicians overly reliant on AI tools and “erode human critical thinking.” 

  • At the same time that they expect AI to transform medical education and empower those in the medical field, clinicians still have serious concerns about reliability, accuracy, transparency, governance, accountability and data security/privacy. 

One doctor said that “these tools are not yet based on scientific evidence, do not provide references and are not yet reliable.”

Still, around two-thirds of clinicians who are not currently using AI expect to be using it within the next 2-to-5 years.

I sat down with Rhett Alden, Elsevier’s CTO of Health Markets, to discuss the report.

The following has been lightly edited for clarity and length. 

The Deep View: What findings in the report were most surprising to you, or most expected? 

Rhett Alden: I think the optimism was somewhat expected. But there's also underlying concern that you can see in some of the responses, so it's this balance of optimism and anxiety that you see kind of flowing through the survey. And that's something that we also encountered in some earlier surveys that we've done: people are excited but also apprehensive. And you can see that in some of the answers.

The Deep View: The report highlights clinician concerns about misinformation alongside overwhelmingly optimistic impressions on the ways in which AI can improve and transform education. These two things seem at odds — how do you reconcile them? 

Rhett Alden: In some ways, it's highlighting what's already there. What I mean by that is that there is misinformation in peer-reviewed literature: there's incorrect information, there's also plenary information. What this gives you is liquidity of access, which allows that information to flow more easily. So I think it's bringing to the fore the concerns about spurious or suspicious data, but from my perspective, that already exists. 

This is one of the tensions in AI: on the one hand, it's an incredible tool if you're knowledgeable. On the other, I think it could be a suspicious tool if you're not; if you're inexperienced, you don't know when to call BS on it.

And we're trying to be really cautious with students, for example, because students don't know. So you need to be really careful in terms of how you convey information, making sure that the information is measured and has some level of confidence behind it.

The Deep View: Researchers made clear in the report that they’re looking for regulation; what kind of regulation would settle their concerns? 

Rhett Alden: What is lacking right now is solid governance. And I think what people are jumping to is that they need external regulation to solve that problem because it's been moving so fast. And I think there's a role for the FDA or external regulatory bodies for sure. 

But there's also a role for healthcare institutions and even educational systems to really think about how they govern internally.

The Deep View: What do you think will change in the next 2-to-5 years that will get clinicians who are not currently using AI to use it? 

Rhett Alden: I think it's regulation and controls. That's going to be a big piece. And the other one is general acceptance. What we found in the market is that, in healthcare in particular, people don't want to be first in anything. I've been in healthcare for a long time, and innovation moves slowly. I think what you're seeing here is sort of emblematic of, ‘I want to see somebody else kick the tires, and then I'll move forward with it.’

What it really does show though, is that people see the tremendous opportunity. That is really encouraging. What people need to understand is how to demystify it.

This is a tool, right? And like any tool it needs to be used appropriately. I use the example of a hammer; you can hammer a nail or you could kill somebody. And it's the same thing here — you need to demystify this and say ‘look, this is a tool to help you with certain tasks, and then some tasks aren't appropriate for it.’ And that’s where governance really comes in, to help people understand that. 

It’s not magic. That’s what I’m trying to convince people of.

“It’s not magic” might represent the three most important words in the field of artificial intelligence.


A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your view on switching to privacy-focused software:

35% of you said you would absolutely make the switch to privacy-focused software, while 15% said you already have.

But 20% of you don’t care about privacy/AI training and a further 12% said that your current services are just too convenient to leave, privacy or not.

Something else:

  • “I would consider it depending on the full pros and cons comparison, including ongoing costs and ability to operate efficiently and effectively. For many, it'll also depend on the sensitivity of the data they store.”

What do you think about AI in a clinical environment?
