
⚙️ U.S. hospital teams up with Suki for an AI assistant

Good morning. Today is World Elephant Day, so we’re talking about elephants. Fortunately, that fits neatly into our AI for Good series.

In other news, the Olympics wrapped up on Sunday. We’ll have to wait another four years to see Olympic breaking again.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Protecting African forest elephants

  • The FCC has a new proposal for AI-generated robocalls

  • The OpenAI Corner: Safety & Trust

  • U.S. hospital teams up with Suki for an AI assistant

AI for Good: Protecting African forest elephants

Source: IBM

The African forest elephant, the slightly smaller cousin of the African savannah elephant, is a critically endangered species. It is also vital for forest health; the seeds of many of the trees that make up the rainforests in which it lives germinate only after passing through an elephant’s digestive system. 

The World Wildlife Fund Germany (WWF) is teaming up with IBM to leverage artificial intelligence tools to protect these elephants. 

The details: The WWF will be using IBM’s Maximo Visual Inspection software to track elephant populations, focusing on unique features to generate more accurate population estimates. 

  • Part of the project’s goal is also to quantify the economic and environmental impact of these elephants; the WWF has estimated that one forest elephant can increase the carbon capture capacity of a given forest by around 250 acres. 

  • That is equivalent to taking a year’s worth of carbon emissions from more than 2,000 cars out of the atmosphere. 

The WWF will also employ IBM’s environmental intelligence tools to track vegetation and biomass in specific areas, to better understand the movements and impacts of these elephants. 

Why it matters: “Counting African forest elephants is both difficult and costly,” WWF’s Dr. Thomas Breuer said in a statement. “Being able to identify individual elephants from camera trap images with the help of AI has the potential to be a game-changer.”

He said that it will allow for better, more targeted conservation efforts.

Stop wasting time bouncing between AI chat apps like ChatGPT or Claude AI to find the perfect AI responses for your needs!

Try TypingMind – your all-in-one professional chat interface that changes the way you interact with AI models!

Why choose TypingMind?

TypingMind allows you to chat with ChatGPT, Claude, Gemini, Azure OpenAI, and more, all in one place using your API key—no monthly fee, only pay for what you use.

But that’s not all: the app also significantly enhances your AI experience with:

  • Organized conversations: keep your chats tidy with folders and tags.

  • High-quality AI outputs: leverage a customized prompt library to guide AI responses to your preferences.

  • Custom AI Agents: build your own AI agents, and integrate them with your internal data to get domain-specific answers.

  • Powerful plugins: extend AI capabilities with plugins that allow you to generate images, scrape the web, render diagrams or charts, and more.

  • Voice interaction: speak to AI using voice input and text-to-speech capabilities.

  • Privacy & security: your data will not be used for training purposes.

The best part? All AI models connected to TypingMind have real-time internet access, which ensures you get the most up-to-date information in AI responses!

If you want to get in front of an audience of 200,000+ developers, business leaders and AI enthusiasts, get in touch with us here.

The FCC has a new proposal for AI-generated robocalls

Source: Unsplash

The U.S. Federal Communications Commission this week proposed a new set of rules designed to target AI-generated robocalls. 

The details: The new rules would require robocallers to disclose when a call is generated using artificial intelligence. 

  • This builds upon the FCC’s February declaratory ruling, which brought AI-generated robocalls under the Telephone Consumer Protection Act (TCPA) and requires call makers to obtain a consumer’s express consent before robocalling them with AI technology. 

  • The FCC said that consumer complaints about robocalls and texts are “consistently the top category of consumer complaints that we receive.” The agency is currently seeking comment on the proposal.

Some context: This comes as AI-generated scam calls have been steadily rising, with scammers cloning the voices of victims’ relatives; experts have told me that it’s not a bad idea to set up a family password so you can be sure you’re speaking to the person you think you’re speaking to. 

Topview.ai: Create marketing videos with GPT-4o + AI avatars

Topview is an online AI video editor that turns your links or media assets into viral videos in one click. It draws on the YouTube, TikTok and Facebook ad libraries and enhances videos with realistic AI avatars.

How it works:

  • AI extracts insights from 5,000,000+ viral videos on YouTube & TikTok.

  • GPT-4o learns from 5 million videos to write your best scripts.

  • And it automatically creates entire videos with AI.

  • Use realistic AI avatars for marketing videos, just like influencers do.

If you want to get in front of an audience of 200,000+ developers, business leaders and AI enthusiasts, get in touch with us here.

  • Hugging Face has acquired XetHub, a development platform for ML teams.

  • Voice AI company SoundHound has acquired software startup Amelia AI.

  • Artisan's AI BDR Ava automates your entire outbound demand generation, delivering hot leads directly to your inbox on autopilot.

  • Markets just had the most volatile week since the pandemic outbreak — What we learned (CNBC).

  • The Board Games That Techies Can’t Stop Playing (The Information).

  • Activist investor who took on Apple, Exxon readies his next act (Semafor).

  • Indonesian fishermen are using a government AI tool to find their daily catch (Rest of World).

  • Inside Disney’s $60 billion plan to supercharge its theme parks and cruises (Fortune).

If you want to get in front of an audience of 200,000+ developers, business leaders and AI enthusiasts, get in touch with us here.

The OpenAI Corner: Safety & Trust

Source: OpenAI

OpenAI last week published its GPT-4o System Card, a document that outlines OpenAI’s safety testing and evaluation process; the company found that GPT-4o poses “medium” risk, and so can be released. 

The details: OpenAI said that in the categories of model autonomy, cybersecurity and biological threats, GPT-4o scored “low.” But in “persuasion,” it scored “medium.” 

  • The card goes on to detail OpenAI’s extensive red-teaming efforts, as well as the company’s thought process behind releasing the model. It comes amid an ever-strengthening storm of safety-related criticism that has battered the company for months. 

At around the same time, U.S. Sen. Elizabeth Warren (D-MA) and Rep. Lori Trahan (D-MA) sent a letter to OpenAI seeking answers regarding whistleblower protections and safety practices at the company. 

This was first reported by The Verge.

  • The theme of the letter — which you can read here — revolves around the idea that “OpenAI’s leadership has prioritized profits over safety.” 

  • “OpenAI’s board members and employees have repeatedly warned that OpenAI has sacrificed safety in the pursuit of profit,” they wrote. “Given the discrepancy between your public comments and reports of OpenAI’s actions, we request information about OpenAI’s whistleblower and conflict of interest protections in order to understand whether federal intervention may be necessary.”

OpenAI has until Aug. 22 to respond to the request for information.

U.S. hospital teams up with Suki for an AI assistant

Source: Suki AI

Ascension Saint Thomas, a leading hospital in Tennessee, last week announced a partnership with Suki to integrate the company’s AI voice assistants into the hospital’s workflow. 

The details: Suki’s main product is the Suki Assistant, a genAI-powered system that automatically creates clinical documentation by ambiently listening in on clinician-patient conversations. Suki has said that its system helps doctors complete their notes 72% faster on average. 

The intention is to lift the administrative burden from clinicians, something that can improve both quality of care for patients and quality of life for clinicians, for whom administrative burden is a leading cause of burnout, according to one recent study.

  • The partnership with Ascension Saint Thomas will give the hospital’s second- and third-year residents, plus all of its clinicians, access to the Suki Assistant. 

  • "Suki has completely changed how I care for patients," Dr. Missy Scalise, Ascension’s Chair of clinician well-being, said in a statement. "It has cut down on my documentation time dramatically, saving me hours per week, but most importantly, it allows me to focus more on my patients and less on the computer during visits. 

“All of this combined has improved my well-being and given me more time with my family,” she added. 

Considering that this involves the deployment of generative AI, there is necessarily a high bar for reliability, trust, data privacy and security. 

So I asked Suki how it handles these factors. 

The extra info: In terms of reliability, the notes that Suki generates begin as suggestions that must be approved by the clinician to appear in the final note. Plus, Suki told me that the company has developed an “extensive clinical evaluation framework to assess the quality of LLM output.”

  • Recordings of conversations are stored in a “secure cloud environment” for seven days before being deleted; patients have the option to opt out if they don’t want Suki to record their visit. 

  • The company said that it “uses industry-leading security measures to ensure the authenticity, integrity and privacy of data, both at rest and in transit,” adding that Suki is SOC 2 Type 2 certified and HIPAA compliant.

Bel Srikanth, Suki’s VP of product and engineering, told me recently that when it comes to bias and hallucination, Suki doesn’t have the same problems as other popular chatbots, since it is “not taking the burden of providing general knowledge answers in the field of medicine.” 

Suki is instead focused on streamlining patient-specific data, a task that still requires plenty of monitoring and humans in the loop, but one that he said poses fewer risks. 

Still, the integration of generative AI into the medical field in this manner raises, or should raise, the bar for the training and education of anyone who will be interacting with these machines. 

  • My greatest concern — above issues of privacy and data security — with something like this is that it might result in over-reliance on a broader technology (genAI) that isn’t trustworthy, and, because of its energy intensity, isn’t a good option for widespread deployment.

I hope this proves to be a good solution to a problem that doesn’t have many other options. 

I just hope it is handled with care, and that the ethical issues of the widespread deployment of such tools (carbon emissions, energy usage, abusive training practices, etc.) are not overlooked along the way.

Which image is real?


A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on playing table tennis with a robot:

A third of you said you’d be down; a third said you wouldn’t. The rest were pretty undecided.

Meh:

  • “Isn't the point to socialize with a person while competing/playing a sport?”

What do you think about the idea of Suki?
