⚙️ Study: Most people can’t identify deepfakes

Good morning. After reiterating that OpenAI is ‘not for sale,’ CEO Sam Altman said that Elon Musk is probably not a “happy person … Probably his whole life is from a position of insecurity. I feel for the guy.”

Musk just submitted a $97.4 billion bid to buy the nonprofit that controls OpenAI, a move that could complicate OpenAI’s pending conversion to a for-profit enterprise.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🌊 AI for Good: Whale tracking 

  • 💰 EU funnels $200 billion into AI, says the race is ‘far from over’ 

  • 🏛️ Judge rules against AI’s ‘fair use’ defense 

  • 🚨 Study: Most people can’t identify deepfakes

🎙️ Podcast

The latest episode of The Deep View: Conversations — a wide-ranging exploration of the ethics of AI with the Director of the Markkula Center’s Internet Ethics program — is out! Check it out below: 

AI for Good: Whale tracking 

Source: Project CETI

The Project CETI initiative is guided by one main goal: to understand whale sounds. A massive undertaking employing legions of linguists, machine learning specialists, natural language processing experts and marine biologists, the project — like many entrants in the AI field — is limited by the data it can gather. 

What happened: In an effort to enhance its whale sounds database, a team of Project CETI researchers developed a new machine learning framework that leverages sensors and autonomous drones to predict where whales will surface. 

  • A number of data points — measurements from autonomous drones, underwater sensors, existing whale tags and whale motion models from previous studies — were combined and fed into the new algorithm, which is designed to maximize rendezvous opportunities with whales. 

  • Imagine a rideshare app, but instead of cars and people, you have whales and sensor-laden drones. 
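The rideshare analogy above can be sketched as a simple greedy matching problem. This is an illustrative toy only, not CETI’s actual framework (which fuses drone measurements, underwater sensor data and whale motion models); the names and probabilities below are invented:

```python
def assign_drones(rendezvous_prob):
    """Greedily pair each drone with the predicted whale surfacing
    that gives the highest remaining rendezvous probability.

    rendezvous_prob: dict mapping (drone, whale) -> probability that
    the drone reaches that whale before it dives again.
    Returns a dict of drone -> whale assignments.
    """
    assignments = {}
    taken_whales = set()
    # Consider the most promising drone-whale pairs first.
    for (drone, whale), p in sorted(
        rendezvous_prob.items(), key=lambda kv: kv[1], reverse=True
    ):
        if drone not in assignments and whale not in taken_whales:
            assignments[drone] = whale
            taken_whales.add(whale)
    return assignments

# Hypothetical surfacing predictions for two drones and two whales.
probs = {
    ("drone_a", "whale_1"): 0.9,
    ("drone_a", "whale_2"): 0.4,
    ("drone_b", "whale_1"): 0.7,
    ("drone_b", "whale_2"): 0.6,
}
print(assign_drones(probs))
```

A greedy pass like this is the simplest way to “maximize rendezvous opportunities”; a production system would presumably solve the assignment jointly and re-plan as new sensor data arrives.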

Why it matters: Beyond advancing CETI’s efforts at the data collection necessary to better understand the meaning behind whale calls, such an algorithm also has conservation implications, as it can be leveraged to “help ships avoid striking whales while at the surface.”

Midnight Deadline: Hours Left to Invest in The Biggest Disruption to IP Since Disney 🚀

Today’s the last day for investors to get a piece of some of the biggest names in entertainment.

Elf Labs has secured 100+ historic trademarks for legendary characters like Cinderella & Snow White, and is giving investors a rare opportunity to tap into the $2T entertainment & merchandising market.

Not only have they signed new global licensing deals across toys, apparel, and food; they’re also using patented AI, AR & VR tech to bring these characters to life on a revolutionary new platform. Think next-gen immersive entertainment and AI-powered talking toys. 

But here’s the catch: only limited shares remain.

Cinderella’s magic ended at midnight. Don’t let yours: Invest Before Midnight.

EU funnels $200 billion into AI, says the race is ‘far from over’ 

Source: European Union

The news: The European Union on Tuesday launched what it is calling “InvestAI,” an initiative to funnel some 200 billion euros ($206.5 billion) into a union-wide artificial intelligence build-out. 

The details: Part of the money will be allocated to the construction of four new AI “gigafactories,” which will be used to train “the most complex, very large, AI models.” Each of these data centers will feature racks of roughly 100,000 state-of-the-art AI chips, a push that alone could cost north of $12 billion. 

  • These factories will represent a massive public-private partnership, whose goal is to develop openly accessible systems designed to serve “complex” and “mission-critical” applications. “The goal is that every company, not only the biggest players, can access large-scale computing power to build the future.”

  • Part of the push involves a massive campaign to boost AI literacy and education, encourage investment, develop “common European data spaces” to help out developers and explore novel applications of the tech, including healthcare and robotics. 

“We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere,” EU Commission President Ursula von der Leyen said in a speech at the summit. “Too often, I hear that Europe is late to the race – while the US and China have already gotten ahead. I disagree. Because the AI race is far from over.”

This follows shortly after the Commission’s announcement of a new project to build a series of state-of-the-art, open-source LLMs. 

The landscape: The EU has been roundly criticized for its regulatory approach to AI, which, though contradictory, somewhat vague and full of caveats, represents the clearest, strictest set of bloc-wide laws regarding the technology. But von der Leyen acknowledged that the Commission will “have to cut red tape — and we will.” 

The Paris Summit concluded with a vague, heavily criticized statement on the globally cooperative development of ethical AI, a statement that the U.S. and U.K. refused to sign. 

Webinar: GenAI Use Cases and Challenges in Healthcare - Fiddler x Mount Sinai

Register to learn:

  • AI-driven healthcare use cases, from AI-powered diagnostics and risk prediction to document summarization, information extraction, and internal clinical chatbots.

  • Key risks including bias in predictive applications and data privacy concerns related to PII and PHI.

  • Best practices for responsible GenAI deployment, ensuring transparency, and regulatory adherence.

  • Awesomic is a subscription-based app that connects companies with vetted designers, developers, and other specialists—ready to start immediately.

    • Start with one specialist and seamlessly switch between different experts as your project evolves—from UI/UX designers to developers, from copywriters to marketers. With the new Super Plan, you get a full team of senior experts for just $3,000 per month. Hire Smarter with Super Plan.*

  • The biggest breakthrough in next-gen tech since iPhone: Today is your last chance to invest.

    • Historic opportunity: Elf Labs, with rights to Cinderella, Snow White, Sleeping Beauty, and The Little Mermaid, is transforming the $2 trillion media industry with patented, cutting-edge tech. Lock in your stake at $2/share now—before the game-changing product launch and valuation update drive prices up.*

  • Anduril to take over Microsoft’s U.S. Army $22-billion headset program (CNBC).

  • Sam Altman dismisses Musk bid to buy OpenAI in letter to staff (Wired).

  • FTC finalizes order with DoNotPay that prohibits deceptive ‘AI lawyer’ claims (FTC).

  • Apple partners with Alibaba to develop AI features for iPhone users in China (The Information).

  • What $200 of ChatGPT is really worth (The Verge).

Judge rules against AI’s ‘fair use’ defense

Source: Unsplash

The news: Thomson Reuters successfully convinced a judge that the ‘fair use’ defense does not protect the AI-powered ingestion and regurgitation of copyrighted material. 

  • The lawsuit, filed by Thomson Reuters in 2020 against rising legal research competitor Ross Intelligence, Inc., accused Ross of training its AI model for legal search on Reuters’ copyrighted legal content. 

  • The ‘fair use’ defense — which has been cited by just about every AI developer currently embroiled in legal challenges — has four main pillars that must be examined by judges: the purpose of the use, the nature of the copied material, the amount of the material used and the effect of the copying on the value of the original work. The first and fourth factors are the most important ones. 

Judge Stephanos Bibas gave the first and fourth factors to Thomson Reuters, and the second and third to Ross Intelligence. Weighing the value of the factors, Bibas rejected Ross’s defense. 

The impact: This is the first clear-cut ruling yet on whether the ‘fair use’ defense holds any water.

Randy McCarthy, a U.S. patent attorney from the national law firm Hall Estill, called the case “notable, but also relatively narrow because it does not deal with generative AI,” meaning it’s unclear how it will impact all the other ongoing AI-related copyright lawsuits.

“One thing is clear (at least in this case), merely using copyrighted material as training data to an AI cannot be said to be fair use per se,” McCarthy said.

Study: Most people can’t identify deepfakes 

Source: Created with AI by The Deep View

The news: A new study out of biometric identity solutions provider iProov found that an overwhelming majority of people are incapable of accurately distinguishing real images and videos from AI-generated deepfakes. And it’s a problem. 

The details: The study tested a cohort of 2,000 U.S. and U.K.-based consumers, exposing them to a string of real and deepfaked content. Only 0.1% of participants were able to correctly distinguish all the real and deepfaked content. 

  • Before the study, only 22% of participants knew what deepfakes even were; still, more than 60% of them were confident in their abilities to discern reality from AI generations, an indication, according to iProov, of dangerous overconfidence. 

  • The study additionally found both that synthetic videos are significantly harder for people to detect than synthetic images, and that older populations are much more vulnerable to deepfakes. 

Deepfakes have been surging in a number of forms, particularly since early 2023, though the technology predates ChatGPT’s launch by several years. 

What generative AI has enabled on this front is the widespread democratization of the technology. That, in turn, has fueled the targeted harassment of women and girls, celebrities and high school teens alike, through a surge in nonconsensual deepfake pornography. We’ve also seen surges in political and other forms of misinformation, alongside a massive increase in fraud spanning everything from faked video calls and stolen company assets to fake phone calls demanding ransom payments. 

And indeed, fraud numbers — especially scam-related fraud — have been spiking significantly since 2023. 

iProov identified a 704% increase in face-swapping in 2024 alone.

You can test your detection skills here.

When it comes to cybersecurity, considering the entire world is built on the back of digital technology, the implications of this proliferation — and people’s inability to recognize it — are significant. 

The solution to the problem involves policy and regulation, at least in part. But it also involves a broad shift in mindset to a place of zero-trust in digital technologies, alongside the creation of technological shields. 

iProov — which you might encounter if you ever sign in to IRS.gov — exists to provide that third prong in the form of visual, digital biometric verification. An iProov executive told me late last year that iProov’s verification solution is based around a series of randomized, different-colored lights playing across the face of the user. 

  • It then runs algorithms that have been trained to detect the reflection in human skin of that randomized sequence of colors, which allows iProov to verify that there is, indeed, a live human in front of the camera. 

  • The randomized sequence of colors prevents bad actors from attacking a system, since they can’t predict what the sequence of lights might be (and since deepfaked images don’t reflect light the way human flesh does). 
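The challenge-response idea described above can be sketched roughly as follows. This is a toy illustration, not iProov’s implementation; the function names, color palette and matching logic are all invented (the real system analyzes how skin reflects light in video, rather than comparing symbolic sequences):

```python
import secrets

PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length=8):
    """Server picks an unpredictable color sequence to flash at the user."""
    return [secrets.choice(PALETTE) for _ in range(length)]

def verify(challenge, observed_reflections):
    """A live face reflects the flashed sequence back in order;
    a pre-recorded or synthetic feed cannot know it in advance."""
    return observed_reflections == challenge

challenge = issue_challenge()
# A genuine user's face reflects exactly the colors that were flashed:
assert verify(challenge, list(challenge))
# A replayed recording carries some earlier session's colors and fails
# with overwhelming probability, since the sequence is unguessable.
```

The security property rests on the challenge being fresh and unpredictable per session, which is why a cryptographic source like `secrets` (rather than `random`) is the right tool for the server’s side of the exchange.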

The push for biometrics is widespread. Apple’s Face ID is perhaps the best example of this, and, according to Apple, Face ID isn’t actually that vulnerable. It uses infrared lasers to map the geometry of your face, meaning a 2D photograph or image can’t fool it. 

But the executive told me that “the protections against deepfakes are still not necessarily to the point that they need to be without a company like iProov, and that's where we come in.” 

Most, if not all, issues relevant to artificial intelligence really come down to a lack of literacy around the tech and its impacts. While there are measures that ought to be taken by cybersecurity companies and policymakers alike, ordinary people still need to adjust to an environment where you cannot, by default, trust anything you see or hear digitally.

Unfortunately, people are collectively probably going to have to go through quite a bit of hurt before that lesson really sinks in.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “Cloud reflections made sense.”

Selected Image 2 (Right):

  • “Image 1 has a strange-looking cave in the rock that doesn't seem real.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI and critical thinking:

35% of you think GenAI actually stimulates critical thinking, rather than the opposite.

22% think it does degrade their critical thinking, and it’s a problem. The rest aren’t sure.

Yeah, it’s bad:

  • “I work as a research librarian at a university. I can't tell you how many students come to ask me for resources that don't exist. Some will even argue with me because ‘ChatGPT told me.’ In general, I've also found that students that I know use AI more frequently struggle to read, analyze, and write about research papers.”

Something else:

  • “While "it depends" isn't always a clear response, it works here. If we simply rely on the GenAI output, we deprive ourselves of the joy of critical thinking. Rather, if we use GenAI as an input, it can enhance how we think about an issue - adding to our process of thinking critically.”

How good are you at detecting deepfakes of people?


If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.