
⚙️ Department of Homeland Security: ‘The world needs to be aware of the inherent risk of deepfakes’

Good morning. Yesterday, Nvidia surpassed Apple and Microsoft to become the largest company (by market cap) in the world. It closed Tuesday with a market cap of nearly $3.4 trillion (at a forward price-to-earnings multiple of about 40 times).

In today’s newsletter: 

  • 🩺 AI for Good: Machine learning and robotics in the operating room

  • 🛳️ Report: AI can help the shipping industry slash its carbon footprint

  • 📱 AI avatars are coming to TikTok

  • ⚙️ Department of Homeland Security: ‘The world needs to be aware of the inherent risk of deepfakes’

AI for Good: Machine learning and robotics in the operating room

A shot of the gen-5 da Vinci robot (Intuitive Surgical, Inc.).

Nearly 25 years ago, American surgeons gained a new type of colleague, a robotic surgical assistant called da Vinci. While not a true robot — the da Vinci system is technically a master-slave telemanipulator — the system allows surgeons to operate more precisely while seated, making long operations easier to get through. 

How it works: The da Vinci consists of three parts: a surgical station with four robotic arms, a console (with two arms) controlled directly by the surgeon, and a monitor.

In March, the fifth generation of the robotic system launched; it comes complete with far more computing power and a bunch of new features. 

  • One of these features is called “force feedback” and enables the system to measure the amount of force exerted on tissue during surgery. Less force “may translate to less trauma on tissue.”

This latest generation is also able to deliver “objective performance insights” that surgeons can use to evaluate cases. 

Teams at Intuitive — the company behind da Vinci — are leveraging AI and machine learning technology to derive insights from two decades of usage data. 

Why it matters: These insights can help surgeons “learn not only from cases of lower complexity, but also from more complex cases that address anomalies and complications. This is all to help reduce the impact of surgical variability and deliver better, more consistent outcomes.”

AI can help the shipping industry slash its carbon footprint

Photo by Ian Taylor (Unsplash).

A new report from autonomous shipping company Orca AI — first reported by Reuters — found that the global shipping industry could reduce its carbon emissions by 47 million tonnes per year by leveraging AI. 

Key points: The study found that applying AI to sea navigation would enable fewer route deviations and maneuvers. 

  • This would cut approximately 38.2 million nautical miles of travel per year, saving each ship an average of $100,000 in fuel. 

Why it matters: In 2022, international shipping accounted for around 2% of global energy-related carbon emissions, according to the International Energy Agency (IEA). 

  • Carbon emissions from the sector have been on the rise since 2020, when they briefly declined. 

Carbon emissions from the international shipping industry over time (International Energy Agency).

Together with Brave

When it comes to developing an AI model, the phrase ‘garbage in, garbage out’ certainly applies. The Brave Search API exists to sift through that garbage, so you get the best training data out there.

The Brave Search API — an index of 20 billion web pages — can be used to assemble an ethical, human-representative dataset to train your AI models. And it’s affordable.

What makes it special?

  • It’s built on real page visits from anonymous humans, which filters out tons of junk data.

  • It’s independent and proprietary, meaning no Big Tech biases or extortionate prices.

  • It’s refreshed daily, so it’s always accurate and up-to-date.
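If you want to kick the tires before wiring it into a data pipeline, the sketch below shows one way to pull web results from the Brave Search API in Python. It is a minimal example, not production code: the endpoint and X-Subscription-Token header reflect Brave’s public Web Search documentation as we understand it, and BRAVE_API_KEY is our own placeholder for your key, so check the current docs for exact parameters.

```python
import json
import os
import urllib.parse
import urllib.request

# Minimal sketch: fetch one page of web results from the Brave Search API.
# The endpoint and header names reflect Brave's public Web Search docs as we
# understand them; confirm against the current documentation before relying
# on them. BRAVE_API_KEY is a placeholder environment variable for your key.

def brave_web_search(query: str, count: int = 10) -> dict:
    params = urllib.parse.urlencode({"q": query, "count": count})
    request = urllib.request.Request(
        f"https://api.search.brave.com/res/v1/web/search?{params}",
        headers={
            "Accept": "application/json",
            "X-Subscription-Token": os.environ["BRAVE_API_KEY"],
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    results = brave_web_search("retrieval-augmented generation")
    # Each web result typically carries a title, URL and snippet that could be
    # folded into a training or retrieval dataset.
    for item in results.get("web", {}).get("results", []):
        print(item.get("title"), item.get("url"))
```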

AI avatars are coming to TikTok


Photo by Solen Feyisa (Unsplash).

TikTok on Tuesday announced “TikTok Symphony,” an AI-powered “creative” suite. Part of the suite gives creators and advertisers the option to create AI-generated videos complete with custom and stock “AI avatars.” 

Details: TikTok says this is “designed to enhance and amplify human imagination, not replace it,” by handling the “heavy lifting” of making a TikTok. 

  • TikTok added that all of these videos will be labeled as AI-generated and that the stock avatars are derived from licensed, paid actors. 

But the FTC said … The announcement comes just days after the FTC published a list of do’s and don’ts when it comes to AI. And I quote: 

  • “Don’t use consumer relationships with avatars and bots for commercial manipulation.”

“A company offering an anthropomorphic service also shouldn’t manipulate people via the attachments formed with that service, such as by inducing people to pay for more services or steering them to affiliated businesses.”

TikTok did not respond to a request for comment. 

💰AI Jobs Board:

  • Founding Engineer: Lume · United States · New York · Full-time · (Apply here)

  • Machine Learning Engineer: Core Solutions, Inc. · United States · King of Prussia, PA · Full-time · (Apply here)

  • Distinguished AI Engineer: X Boson Inc. · United States · San Francisco Bay Area · Full-time · (Apply here)

 📊 Funding & New Arrivals:

🌎 The Broad View:

  • Uber and Lyft are fighting minimum wage laws. But in this state, the drivers won (Dara Kerr of NPR).

  • EV maker Fisker files for bankruptcy (Barron’s).

  • Surgeon General calls for warning labels on social media (New York Times Op-Ed).

  • McDonald’s kills its AI drive-thru experiment (Axios).

  • FTC finds ‘reason to believe TikTok is violating or about to violate the FTC Act and the Children’s Online Privacy Protection Act’ (FTC).

  • Ready to supercharge your career?

    • Sidebar is a leadership program where you get matched to a small peer group, have access to a tech-enabled platform, and get an expert-led curriculum. Members say it’s like having their own Personal Board of Directors.*

*Indicates a sponsored link

Together with Athyna 

“We need a front-end developer by Tuesday, but it’ll take months to find someone in the U.S.”

We use Athyna at The Deep View — and a bunch of our friends do too. If you are looking for your next remote hire, Athyna has you covered. From finance to creative, ops to engineering.

The secret weapon for ambitious startups. No search fees. No activation fees. And up to 80% savings compared to hiring locally.

Incredible talent, matched with AI precision, at lightning speed.

Department of Homeland Security: ‘The world needs to be aware of the inherent risk of deepfakes’


Photo by Markus Spiske (Unsplash).

Deepfake technology has been around for years. And some applications of it — de-aging actors in movies, for example — have been pretty innocuous (unless, of course, you were offended by the latest Indiana Jones movie). 

What has changed in the past 18 months is that publicly accessible AI models have lowered the bar to making and distributing deepfakes; now, you don’t need any technical skill, and it takes no more than a few seconds. Commercial, consumer-facing AI has supercharged the deepfake process, enabling all sorts of actors to produce highly realistic synthetic content. 

And you don’t have to look too long or hard to see the impact this has already had. 

At the core of deepfake abuse are three things:

  • Nonconsensual pornography

  • Misinformation 

  • Fraud/Identity theft

The first gained a lot of attention in January, when explicit, fake (and nonconsensual) images of Taylor Swift went viral online. But the issue has been impacting young women and girls for months; cases of high school (and even middle school) boys spreading deepfaked nudes of their female classmates continue to stack up.

  • Experts, meanwhile, are particularly concerned about the misinformation side of things considering the number of global elections taking place this year. It is perhaps best encapsulated by the deepfaked robocall of President Joe Biden that circulated earlier this year, telling people not to vote in the New Hampshire primary. 

  • As the nonprofit CivAI pointed out to me, malicious actors could conceivably publish fake local news stories (designed to look real) telling voters to stay away from the polls on election day for some fabricated, malicious reason. AI would allow these bad actors to do this at speed and scale.

Issues of fraud have also been accelerated by AI. This is perhaps best represented by the rise in deepfake ransom phone calls, where bad actors demand a ransom payment to free a ‘kidnapped’ family member, whose voice has been cloned with AI technology (or where bad actors pose as family members to steal money).

In a recent report, the U.S. Department of Homeland Security said that “deepfakes and the misuse of synthetic content pose a clear, present and evolving threat to the public.”

  • The Department said that deepfakes are here to stay, adding that the “world needs to be aware of the inherent risk of deepfakes by malign actors.”

Key points: Since 2018, between 90% and 95% of all deepfake videos “were primarily based on nonconsensual pornography,” according to the report. 

  • The Department said the threat is prominent for individuals, businesses, industries and nations. 

  • With this tech rapidly advancing, the Department expects attacks to continue to proliferate. 

  • The report lays out a series of scenarios (beginning on page 18) that detail the possible impacts of deepfake attacks. 

Mitigation measures: The Department said that the key to mitigation involves an interdisciplinary approach. 

  • Part of this involves the implementation of policy and law regarding deepfakes. 

  • Another part involves making “it a crime to disseminate malicious content.”

The report added that while some states have passed legislation meant to address deepfakes, federal legislation is a better avenue, since state laws vary in strength and specifics. 

“It is time for there to be a coordinated, collaborative approach,” the report reads. 

Zoom out: Google and Runway, meanwhile, each just announced massive upgrades to their video generation technology. 

My View: As one cybersecurity researcher once put it to me: If a system can be abused, it will be. It is — unfortunately — human nature. Broadly speaking, the systems that have been released into the wild were not created with that abuse in mind, making them easy to take advantage of.

Which image is real?

Image 1

Image 2

  • Brave Search API: An ethical, human-representative web dataset to train your AI models. *

  • Videotok: A tool to automate the creation of short videos from text inputs. 

  • Streamer: An AI tool for cost-effective local TV ad campaigns.

Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link

SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.

One last thing👇

That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?


We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View