⚙️ Progress & Predictions 2025: Threats, harms and ethics

Good morning. In this second part of our special edition series, we’re breaking down the dark side of artificial intelligence, the moral and ethical challenges and questions posed by the increasing proliferation of the technology. 

The issues, as always, are manifold.

Relatedly, check out the latest episode of our podcast, an interview with Pindrop about battling audio deepfakes.

— Ian Krietzberg, Editor-in-Chief, The Deep View

The ethics of AI

Source: Created with AI by The Deep View

At the beginning of 2024, the ethics of generative artificial intelligence came into focus in a major way, with a dark spotlight centered on Taylor Swift: a series of sexually explicit, AI-generated deepfakes of the singer went viral on social media, leading to a weeks-long period in late January and early February during which it became hard to avoid a flood of explicit, AI-generated photos of a number of female celebrities. 

Eventually, accounts were banned and the photos were removed, but the problem didn’t go away. 

Deepfake harassment: This kind of deepfake harassment wasn’t a new phenomenon; it has been going on since at least 2017. But back in that pre-GenAI era, it took bad actors hours or even days to produce a single deepfake image, and even then, the results weren’t particularly convincing. 

According to recent research from the nonprofit Internet Matters, some 13% of teenagers say they have had experience with explicit deepfake image abuse. 


In the U.S., there remains no federal legislation addressing this kind of deepfake abuse. The Defiance Act, which would allow victims to sue over the spread of nonconsensual explicit deepfakes, passed the Senate over the summer but has yet to pass the House. At the state level, legislation remains patchwork at best; and while policymakers debate regulatory approaches, the problem persists, with dozens of websites specifically marketed as AI-powered ‘nudify’ tools still operational.

The city of San Francisco filed suit in August against 16 of those websites.

The wider impacts

But the ethical impacts of deepfakes stretch far beyond image abuse and into weaponized identity hijacking that has enabled the spread of a truly concerning variety of misinformation. 

  • Even as AI-powered vocal fraud has become something of a norm (phone calls from AI-generated relatives asking for money, for instance), political and election-related fraud and misinformation spread throughout much of last year; this began with a deepfaked robocall of President Joe Biden that circulated in February, encouraging New Hampshire voters not to participate in the primary. As the election drew nearer, Elon Musk took to sharing AI-generated images and videos of Biden and Vice President Kamala Harris.

  • Both Meta and OpenAI have detailed their efforts to shut down covert influence operations that attempted to utilize the two companies’ generative AI products; looking back at the U.S. election last year, Meta said that it rejected nearly 600,000 requests to generate images of the candidates, adding that the risks of electoral interference from generative AI “did not materialize in a significant way and that any such impact was modest and limited in scope.”
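How does a platform turn away hundreds of thousands of image requests? Meta hasn’t published its implementation, but a naive version of such a request filter might look like the sketch below. The name list, function name and keyword approach are all assumptions for illustration; real systems rely on trained classifiers rather than string matching.

```python
# Naive sketch of a prompt filter for election-related image requests.
# Purely illustrative: the blocked-figures list and keyword matching are
# assumptions; production systems use trained classifiers, not keywords.
import re

BLOCKED_FIGURES = ["joe biden", "kamala harris", "donald trump"]  # hypothetical list

def should_block_image_request(prompt: str) -> bool:
    """Return True if the prompt appears to request imagery of a listed figure."""
    text = prompt.lower()
    return any(re.search(rf"\b{re.escape(name)}\b", text) for name in BLOCKED_FIGURES)

assert should_block_image_request("Photo of Joe Biden at a rally")
assert not should_block_image_request("A husky pulling a sled through snow")
```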

It’s not just election-related information; AI-generated books, images and articles have spread across the internet, threatening widespread information pollution that has already had dangerous impacts. 

In August, we wrote about a British family that was poisoned after using what turned out to be an AI-generated mushroom hunting guidebook; in the process, I discovered several AI-generated mushroom guidebooks listed on Amazon, which the company removed following my request for comment. 

And, pivoting away from deepfakes, issues of mass surveillance and algorithmic discrimination driven by generative AI have proliferated: a November investigation uncovered discrimination baked into Denmark’s digitized social welfare program; AI continues to be folded into already discriminatory predictive policing efforts; and hospitals have kept adopting generative AI, leaving many nurses and patients alike concerned. 


The psychological and sociological impacts of AI companionship, meanwhile, came into sharp focus when a boy died by suicide after falling in love with a Character.AI chatbot. A second lawsuit has since been filed against the company by the mother of a teenage boy who, she argues, became mentally ill and unstable due to a close relationship with bots on the platform.

The environmental impacts of increasing investment in generative AI, meanwhile, have steadily pushed much of the Big Tech sector further from its climate goals, with the bulk of these giants — Meta, Google, Microsoft and Amazon — pursuing nuclear power specifically to meet their AI energy requirements. But before that nuclear capacity comes online, AI data centers are leaning on fossil fuels, a situation that is leading to spiking emissions and a burgeoning public health crisis due to steadily worsening air quality.

There’s also the general cybersecurity war that’s been taking place, with cybercriminals vastly empowered by generative AI tools, which have introduced specificity, speed and scale to their operations. Cybersecurity officials have, meanwhile, adopted generative AI tools to fight back against ever-increasing attacks, even as others work to educate companies and individuals about the myriad security risks posed by AI, with AI and because of AI. 

On top of all this, there are ongoing legal battles regarding the legality of training generative AI models on copyrighted material, even as those models are beginning to be used to replace human workers, a challenging dynamic for the job market going forward. 

In sum, 2024 was a busy year of AI proliferation and integration, and the resulting moral and ethical impacts are vast, to say the least. 

The experts I spoke to expect 2025 to be worse. 

Looking ahead: Nell Watson, an author and leading AI ethicist, told me that, with the rise of the increasingly autonomous agentic AI that we talked about yesterday, “ordinary members of the public will be tasked with teaching and managing these systems, something that remains a major, uncertain challenge even for experts.”

  • “Beyond alignment issues, agentic AI systems will certainly be employed to target systems and individuals for various kinds of attack, whether it’s creating designer synthetic data to poison another model (possibly even hijacking it in the process),” she said. “A friendly being must try to learn, model and accommodate the preferences of others. This is another strength of agentic systems, which could learn to surprise and delight us as a good friend might. However, these same capabilities can be used to observe human foibles, to strike at an exploitable weakness at a calculated moment of greatest impact.”

  • “The transition to agentic AI represents both unprecedented opportunity and risk, and this requires an investment in thoughtful governance to harness its potential while mitigating dangers. We must not only ensure that these powerful AI systems are used in an ethical manner, but we must now also work to ensure that these systems remain safe and loyal partners instead of impish and capricious minions.”

Ajay Amlani, president of iProov, expects deepfake fraud specifically to become fully weaponized in 2025: “a wave of account takeovers and fraudulent transactions will force banking regulators worldwide to take decisive action. Led by pioneers like Thailand and Vietnam, countries will mandate the implementation of biometric verification for payment authentication,” he said. 

Amlani added that he expects a “deepfake of a Fortune 500 CEO will cause significant disruption in the financial markets.” 

  • “The fabricated video, announcing a false merger, will trigger a temporary market dip and erode investor confidence before being exposed. This incident will highlight the growing need for robust identity verification solutions to ensure the authenticity of information and maintain trust in an increasingly digital world,” he said. 

  • “This incident will serve as a catalyst, accelerating the adoption of advanced identity verification solutions in the financial industry.”

Typeform CPO Aleks Bass said that people will “get scammed at a higher rate than what we are seeing now. 

“We might start asking ourselves, ‘is AI really worth it?’ We will see more companies investing money and time into locking down experiences and adding additional security, validation and verifications to prevent some of these abuses. But the challenge will be to create nearly invisible solutions, so as not to interrupt the customer experience.”



Cybersecurity firm Exabeam, echoing much of what we’ve already discussed, expects highly enhanced and empowered cybercriminals, leading to new types of attacks and increasing exploitation. 

Steve Povolny, one of Exabeam’s co-founders, said that, in 2025, “AI will democratize malware creation.”

  • “You won’t need to be a coder to create sophisticated malware in 2025 — AI will do it for you. Generative AI models trained specifically to generate malicious code will emerge in underground markets, making it possible for anyone with access to deploy ransomware, spyware and other types of malware with little effort,” he said. 

  • “Blindly trusting AI-generated outputs will become a major vulnerability for organizations. This will lead to the rise of a new cybersecurity mandate: ‘Zero Trust for AI.’ Unlike traditional Zero Trust principles, Zero Trust for AI is not a prediction for the future; it’s a concept ready for discussion now, bringing a nuanced approach to trusting AI. This framework will require organizations to verify, validate and fact-check AI outputs before allowing them to drive critical security decisions.”
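That last bullet describes a process, not a spec, so here is a rough illustration of what a minimal output gate might look like in code. Everything in this sketch (the Verdict type, the allowlist, the check functions) is an assumption for illustration, not Exabeam’s implementation; the point is simply that AI output is treated as untrusted until explicit checks pass.

```python
# A minimal "Zero Trust for AI" gate: treat model output as untrusted input
# that must pass explicit checks before it can drive a security action.
# All names and checks here are illustrative assumptions, not Exabeam's design.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Verdict:
    allowed: bool
    reasons: list = field(default_factory=list)

def zero_trust_gate(ai_output: str, checks: list) -> Verdict:
    """Run every check; any string a check returns is a reason to block."""
    reasons = [r for check in checks if (r := check(ai_output)) is not None]
    return Verdict(allowed=not reasons, reasons=reasons)

# Example check: only allow actions from a human-approved allowlist.
def known_action(output: str) -> Optional[str]:
    allowlist = {"isolate_host", "reset_credentials", "open_ticket"}
    return None if output in allowlist else f"unrecognized action: {output!r}"

print(zero_trust_gate("isolate_host", [known_action]).allowed)   # True
print(zero_trust_gate("rm -rf /tmp/x", [known_action]).reasons)  # blocked, with reason
```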

Povolny thinks that video-based deepfakes will soon become “imperceptible from reality,” unleashing a “devastating new wave of social engineering attacks” that will allow “criminals to impersonate executives, forge high-stakes transactions, and extract massive payouts from unsuspecting victims. With AI making deepfakes accessible at the push of a button, the potential for financial fraud will explode, forcing organizations to rethink how they verify identity in an increasingly deceptive world.”

Harry Muncey, Senior Director of Data Science and Responsible AI at Elsevier, expects the adoption of generative AI to continue, building confidence in “low-risk use cases” that “will lead to pressure for innovation and use in higher-risk environments, such as healthcare, where the need for robust guardrails is clearly critical.”

And Vijay Balasubramaniyan, the co-founder and CEO of Pindrop, expects deepfake attacks to “accelerate at an alarming rate” next year. 

I wholeheartedly expect — and I really hope I am wrong — that the ethical challenges related to artificial intelligence will become much more significant throughout 2025 as adoption continues, legislation lags and systems become more capable. 

A year ago, I saw no clear solution to the problem of deepfake abuse and harassment; I think we have made no progress on that front whatsoever. Instances will continue to occur at small and large scales. People will get hurt. And while I do worry about misinformation and its spread online, I am far more concerned about schools, where this new, dangerous form of bullying and harassment will, without governance, oversight and regulation, continue unabated. 

I also expect addictions to and obsessions with anthropomorphized GenAI chatbots to steadily spike as vulnerable people interact with these systems, leading to a heavy, noticeable increase in digital isolation, something that may start to become more clearly associated with mental health struggles. 

I expect we will additionally begin to see the problems of overreliance, specifically in high-risk environments. Hospitals, I think, will run into at least a few well-publicized instances of generative AI misuse.

As with any problem, things have to get worse before they get better — I think 2025 will be the year that these issues move off the fringe to become more mainstream; only then might action be taken against them. I do not expect that action to come soon, however. 

On the copyright front, I think we’ll see some significant movement from the big copyright infringement cases this year (Authors Guild and New York Times) that may — and that’s a very big ‘may’ — have implications for the future development of foundation models. I don’t expect mass job loss anytime soon — or at all, really — but I do think we’ll start to see a reduction in hiring due to internal reliance on AI tools. 

On the environmental front, the optimist in me hopes that the sheer cost of data center operation will drive massive innovations in energy efficiency, which will have positive impacts on sustainability. This, however, might very well be wishful thinking; the complexity and use of generative AI systems continues to grow, meaning energy efficiencies will simply free developers up to handle bigger workloads, rather than reducing their energy use. 
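To make that rebound effect concrete, here is a back-of-the-envelope sketch with invented numbers; none of these figures are measurements, only an illustration of how a 2x efficiency gain can be swamped by 3x workload growth.

```python
# Back-of-the-envelope rebound arithmetic with invented numbers: a 2x
# efficiency gain is more than offset by a 3x growth in workload.
energy_per_query_wh = 3.0        # hypothetical watt-hours per query today
queries_per_day = 1_000_000      # hypothetical demand today

baseline_mwh = energy_per_query_wh * queries_per_day / 1e6            # 3.0 MWh/day
after_mwh = (energy_per_query_wh / 2) * (queries_per_day * 3) / 1e6   # 4.5 MWh/day

print(f"today: {baseline_mwh:.1f} MWh/day; after the 'gain': {after_mwh:.1f} MWh/day")
```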

My areas of biggest concern involve algorithmic surveillance and discrimination, unsustainability, deepfake abuse and artificial companionship; as these issues continue to worsen — I expect them all to become hallmarks of this next year — global society will be pushed to the edge of truly radical transformation. 

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The fake sled appears to be pulling the dogs.”

  • “Image 2 just looks fake.”

  • Nvidia, chip stocks pop after Foxconn reports record revenue (CNBC).

  • Samsung claims its Ballie AI robot will actually be released this year (The Verge).

  • US crackdown leads Chinese firms to set their sights overseas (Semafor).

  • Google plays catch-up in video ad tech (The Information).

  • Chip firms surge on hopes of strong AI-led demand (Reuters).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on agents in 2025:

47% of you think 2025 will be the year of the AI agent. The rest … not so sure.

The year of the AI Agent:

  • “And if that’s correct, he who holds the microphone will have a lot of sway over progressive adoption.”

  • “Make AI more than trying to type and read my brain; I want solutions and agents.”

Thoughts on the ethical issues of AI next year?
