
⚙️ Report: Generative AI boosts physician performance

Good morning. The $14 million Super Bowl ad that OpenAI ran was, funnily enough, made by human animators, not generative AI.

The height of irony.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🎙️ AI for Good: Smart, wild microphones 

  • 🔬 OpenAI close to mass-producing in-house chips 

  • 🧠 AI and the death of critical thinking: Microsoft Research

  • ⚕️ Report: Generative AI boosts physician performance 

AI for Good: Smart, wild microphones 

Source: Synature

The human art of listening to nature is as old as humanity itself, but the scientific field that grew out of it, bioacoustics, is only a few decades old. The field has grown steadily more popular as technological advances have improved scientists’ ability to record and study the ambient sounds of nature.

What happened: The latest advancement involves “smart” microphones: high-tech audio recorders equipped with AI to collect and collate massive amounts of natural sound data.

The details: For a long time, scientists working in bioacoustics would record large amounts of raw audio, then analyze it by hand at a later date, a rather time-consuming process.

  • Synature, a startup spun out of Swiss university EPFL, designed a smart, robust microphone that autonomously gathers ambient audio data and transmits that data to an associated app. 

  • AI algorithms run throughout the process, filtering out background noise and identifying the sounds of distinct species (a rough sketch of what such a pipeline can look like follows below). The system then provides insights into the health of a given ecosystem based on all of that data.
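Synature hasn’t published the internals of its system, but conceptually, a bioacoustics pipeline of this kind boils down to three steps: discard quiet or noisy frames, label the rest by species, and roll the labels up into an ecosystem-health signal. Below is a minimal, hypothetical Python sketch of that shape; the energy threshold, the placeholder classify_frame model and the use of a Shannon diversity index as a health proxy are illustrative assumptions, not Synature’s actual design.

    import numpy as np

    def frame_energy(audio, frame_len=16000):
        # Split a mono 16 kHz waveform into 1-second frames and compute per-frame RMS energy.
        n_frames = len(audio) // frame_len
        frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
        return frames, np.sqrt((frames ** 2).mean(axis=1))

    def classify_frame(frame):
        # Placeholder: a real system would run a trained acoustic model (bird, insect, mammal calls).
        return "unknown_species"

    def ecosystem_snapshot(audio, energy_floor=0.01):
        # Drop near-silent frames, label the rest, and summarize diversity with a Shannon index.
        frames, energy = frame_energy(np.asarray(audio, dtype=float))
        labels = [classify_frame(f) for f, e in zip(frames, energy) if e > energy_floor]
        if not labels:
            return {}, 0.0
        species, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        shannon = float(-(p * np.log(p)).sum())  # higher roughly means more acoustic diversity
        return dict(zip(species.tolist(), counts.tolist())), shannon

The interesting part, and the part the smart microphones automate, is doing this continuously in the field and sending only the distilled results to the app, rather than shipping raw audio back for manual review.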

Why it matters: Conservationists, environmental researchers, governments and corporations alike can do more good for the environment, and mitigate their negative impacts, if they better understand the details of ecosystem health. Systems like Synature’s make those details far more accessible.

MASTER CHATGPT AT WORK: FREE GUIDE FOR AI-SAVVY PROFESSIONALS

Ready to transform your workflow with AI?

Our comprehensive guide "How to Use ChatGPT at Work" puts the power of generative AI at your fingertips.

Inside your free guide, you'll discover:

  • A clear, practical introduction to ChatGPT's capabilities

  • Real-world applications that immediately boost your productivity

  • 100 ready-to-use prompts for daily tasks like email writing, content creation, and data analysis

  • Expert strategies to overcome common AI implementation challenges

Join thousands of professionals who've already unlocked new levels of productivity with this essential resource.

OpenAI close to mass-producing in-house chips 

Source: Unsplash

The news: Reuters reported Monday — based on anonymous sourcing — that OpenAI is finalizing the design of its first in-house AI chip, and plans to send it to TSMC for fabrication within the next few months. 

The details: If fabrication goes smoothly, which is never a guarantee in the chip world, OpenAI could be on track to begin mass production of these chips by 2026.

  • The chip, once in hand, will be deployed on a limited scale at first, according to Reuters. 

  • TSMC declined to comment on the report; OpenAI did not return a request for comment.

Why it matters: At the center of the AI race is an overwhelming need for the complex chips optimized for training and running generative AI models. That need, combined with the stroke of luck or foresight that left Nvidia, at the beginning, as the only company with the necessary chips on hand, is what made Nvidia the dominant megacap it is today.

When Big Tech firms and AI startups alike announce intentions to build out “AI infrastructure,” they are largely referring to the mass purchase of hundreds of thousands more Nvidia GPUs, which underpins the clearest bull thesis for AI’s impact on the stock market: picks and shovels do well during a gold rush.

But the players within this circular ecosystem would rather not be so reliant on Nvidia: Microsoft, Google, Amazon, Meta and IBM have all designed their own in-house, custom AI chips. Some, like Amazon, are trying to challenge Nvidia; others, like Meta (and now OpenAI), are just trying to lessen their reliance on the company.

Still, shares of Nvidia rose on Monday, a possible result of TSMC’s strong January revenue numbers. 

The Internet of Intelligence is here

AGI is approaching quickly, but there's a choice in how it will be structured. Rather than centralized, opaque monoliths running the show, Naptha.AI will allow you to:

  • Contribute to an open ecosystem of billions of autonomous AI agents operating in decentralized, cooperative networks. These agents will work together to process data, predict trends, and generate real-time insights across various industries.

  • Play a part in creating the Internet of Intelligence, a decentralized environment where AI agents work cooperatively to solve complex tasks.

  • Lead product development from the outside, enabling rapid innovation and ensuring products evolve in alignment with real-world needs and user feedback, keeping them relevant and adaptable.

Want to know more? Access Naptha.AI today.

  • Paris’ AI Action Summit is drawing to a close, and it has already come under fierce criticism for not doing nearly enough to genuinely address the harms posed by the technology.

  • Anthropic launched an Economic Index to track the impacts of AI on the labor market; the first paper associated with the index found that nearly 40% of all Claude queries had to do with software engineering and coding, and that more than half of Claude use focused on augmentation rather than outright replacement.

  • Musk-led group offers $97.4 billion for control of OpenAI (WSJ).

  • Gen Z ‘nihilism’ over Chinese tech fears shows gulf with Washington (Semafor).

  • DOGE is now inside the Consumer Financial Protection Bureau (Wired).

  • AI is impersonating human therapists. Can it be stopped? (Vox).

  • France unveils 109-billion-euro AI investment as Europe looks to keep up with U.S. (CNBC).

AI and the death of critical thinking: Microsoft Research

Source: Unsplash

I have no idea how to read a map. My penmanship sucks. The only phone numbers I know by heart are the same two that I learned when I was five years old. 

Technology has enabled the atrophy of skills that were once vital, a phenomenon that goes back thousands of years. Socrates, for instance, worried that the act of writing would degrade the power of people’s minds and memories. 

If Socrates didn’t like writing, I shudder to imagine what he would think of generative AI, a technology that, by its very nature and intended purpose, offers up a sharper, more rapid type of skill atrophy: the atrophy of critical thinking. 

What happened: A new study conducted by Microsoft Research sought to examine the impact of generative AI on critical thinking specifically among knowledge workers. The work builds on a slowly growing area of research into the societal impact — across education, health, creativity and corporate productivity — associated with integrating generative AI. 

  • The paper explored the results of surveys of more than 300 people, covering nearly a thousand first-hand examples of generative AI use in the workplace.

  • The researchers found that GenAI tools “appear to reduce the perceived effort required for critical thinking tasks among knowledge workers, especially when they have higher confidence in AI capabilities.” 

Conversely, workers who are more confident in their own skills tend to think harder when it comes to evaluating and applying generated output. But either way, the data shows “a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight.” 

Though the study has a number of limitations, the researchers determined that the regular use of generative AI is causing a shift from information gathering to verification, from problem-solving to AI response integration and from task-doing to “task stewardship.” 

“While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving,” the researchers wrote, adding that such systems ought to be designed to support critical thinking in people, not to diminish it. 

Report: Generative AI boosts physician performance 

Source: Unsplash

In an environment of wild hype and persistent caution, the integration of AI into the medical world remains a paramount focus for developers and, for many AI researchers, the primary promise of the technology at hand.

A growing number of studies in recent months and years have demonstrated the ability of machine learning systems to aid in diagnosis and drug development; but there’s an evolution of this integration that goes a step further, into the decidedly murkier arena of the split-second decisions that doctors face on a regular basis.

A recent study out of Stanford indicates that generative AI works there, as well. 

The details: Doctors Ethan Goh and Jonathan Chen, alongside a team of Stanford researchers, have been attempting to discern whether large language models (LLMs) are capable of handling nuanced medical scenarios: treatment-relevant questions that don’t have a clear right or wrong answer (for example, what the immediate next steps should be if a doctor inadvertently discovers a sizeable mass in a patient).

  • The team conducted a large, randomized trial involving 92 physicians and three groups: 46 physicians had access to the chatbot alongside conventional resources, the other 46 had access to conventional means and methods only, and the chatbot was also evaluated on its own.

  • The team presented each participant with a set of five real patient cases, then enlisted a panel of doctors to score the resulting written responses that detailed how each doctor (or chatbot) would handle the situation. 

The findings: The paper’s big finding was that physicians using the language model scored “significantly higher” compared to those using conventional methods. The difference between the LLM on its own and the LLM-assisted physician group was negligible. 

It’s not clear what caused the difference: whether the LLMs induced more thoughtful responses from the doctors they were paired with, or whether the LLMs produced chains of thought that the doctors hadn’t considered.

“This doesn’t mean patients should skip the doctor and go straight to chatbots. Don’t do that,” Chen said in a statement. “There’s a lot of good information out there, but there’s also bad information. The skill we all have to develop is discerning what’s credible and what’s not right. That’s more important now than ever.”

The challenge (& the ethics): Still, there’s a reason LLM use isn’t yet widespread in the medical field. Or rather, a few reasons. Chief among them are algorithmic bias and hallucination: incorrect, unverifiable output whose origins can’t be properly traced can present doctors with false information, a pretty major problem if doctors begin to build up an overreliance on these fundamentally flawed systems.

  • There are also issues here of data privacy: in order for these models to do what’s being described, they need access to a trove of personal patient data, a critically risky proposition.

  • This is broadly in line with a survey of doctors published in July by Elsevier, which found a low rate of adoption, driven in part by doctors’ impression that the use of AI can amplify misinformation, cause overreliance and “erode human critical thinking.”

Still, those same doctors were pretty excited about the potential for AI to aid hospitals and improve patient outcomes, and many expect to adopt the tech within the next few years. 

“This is one of the tensions in AI that on the one hand, it's an incredible tool if you're knowledgeable. I think it could be a suspicious tool if you're not; if you're inexperienced, you don't know when to call BS on it,” Rhett Alden, Elsevier’s CTO of Health Markets, told me at the time. 

On the same day that we talked about the erosion of critical thinking due to AI, we’re talking about its possible performance in healthcare. 

One should not ignore the existence of the other. 

To that, I would add that computers cannot be held accountable for their errors; humans can. In an environment as critical as healthcare, we must ensure that these systems can be verifiably trusted, that using them won’t erode vital medical skills in the process and that they deliver genuinely better outcomes, rather than just increased efficiency.

Which image is real?


🤔 Your thought process:

Selected Image 2 (Left):

  • “This one is easy — too many mosques located too close to each other. Also, the pillars of the smaller mosque look off.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the AI trade:

A third of you said that it’s a rollercoaster. 23% said the trade will survive, since the labs are building AGI. 20% think the bubble will burst but the hyperscalers will survive it, and 13% think the trade is almost dead.

Rollercoaster:

  • “The advancement of this technology will certainly ebb and flow. DeepSeek is simply one of the first "surprises" that challenge the status quo. We should both desire and expect more of these surprises in the future.”

Rollercoaster:

  • “We've seen this before in different technologies, and even in AI. Things will come and go in cycles with the successful surviving to start another fad.”

Do you find that GenAI degrades your critical thinking?


If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.