
⚙️ Sam Altman’s copyright defense is that GenAI is basically human

Good morning. As always, Tuesday was a busy day, filled with product announcements and an overabundance of things to just … process.

Added to that list is a new article from Princeton researchers Arvind Narayanan and Sayash Kapoor that proposes an alternative vision of AI: AI as a normal technology.

It’s radical in its un-radical-ness.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🎙️ Podcast: AI, compliance & utopia

  • 🌊 AI for Good: Trying to speak Dolphin 

  • 💰 Report: Tariffs could ‘dent’ VC interest in AI 

  • 🏛️ Sam Altman’s copyright defense is that GenAI is basically human

🎙️ Podcast: AI, compliance & utopia: Can tech actually make the world better?

On this week’s episode of The Deep View: Conversations, I sat down with Dr. Eric Sydell, the CEO and co-founder of Vero AI, to talk about compliance, governance and oversight, plus all the different kinds of worlds AI might eventually bring about.

AI for Good: Trying to speak Dolphin

Source: Google DeepMind

For the past 40 years, the Wild Dolphin Project has been observing, recording and studying dolphins — in the wild — with the goal of “cracking the code” of dolphin communication. 

While the Project, led by Dr. Denise Herzing, has made significant progress in understanding the meaning of certain sounds and whistles, as well as the social structure within small communities of dolphins, that goal of actually being able to communicate with them has remained out of reach. 

What happened: This week, Google DeepMind unveiled DolphinGemma, a language model trained “extensively” on the decades of dolphin recordings gathered by the Project. The system, according to Google, doesn’t just analyze patterns in dolphin speech; it generates “dolphin-like sound sequences.” 

  • The model is relatively lightweight, at only 400 million parameters, and so can be run on the researchers’ smartphones. 

  • It functions as an audio-in, audio-out model: much as a language model predicts the next word in a sentence, DolphinGemma processes a sequence of dolphin sounds and predicts the likely next sound in that sequence (a minimal sketch follows below). 
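
To make the “predict the next sound” idea concrete, here’s a minimal, hypothetical sketch in Python. This is not DolphinGemma (a 400-million-parameter model, not a bigram counter); the token names, the assumed quantization step and the predict_next helper are all illustrative assumptions. The point is just the analogy between next-word prediction and next-sound prediction over discretized audio.

```python
# Toy illustration of next-sound prediction -- NOT DolphinGemma itself.
# Assumption: each whistle or click has already been quantized into a
# discrete token, the audio analogue of a word ID.
from collections import Counter, defaultdict

recordings = [  # hypothetical tokenized dolphin recordings
    ["whistle_A", "click_burst", "whistle_B", "whistle_A"],
    ["whistle_A", "click_burst", "whistle_C"],
]

# Count token-to-token transitions across all recordings.
transitions: defaultdict[str, Counter] = defaultdict(Counter)
for sequence in recordings:
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev][nxt] += 1

def predict_next(token: str) -> str | None:
    """Return the most frequent next token, like next-word prediction."""
    followers = transitions.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("whistle_A"))    # -> "click_burst"
print(predict_next("click_burst"))  # -> "whistle_B" (tie broken by insertion order)
```

A real audio language model replaces the bigram table with a transformer and the hand-labeled tokens with a learned audio codec, but the training objective, predicting the next element of a sequence, is the same in spirit.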

The Project is beginning to deploy the model in the field; the idea is that it will help the researchers identify patterns of communication, and uncover meaning within those patterns, that previously went unnoticed. Google said it will be released as an open model this summer. 

“You’re going to understand what priorities they have, what they talk about,” Herzing said. “The goal would be to one day ‘speak dolphin,’ and we’re trying to crack the code.” 

Design and ship your dream site with Framer. Zero code, maximum speed.

Just publish it with Framer.

Beautiful sites without code — easily. Join thousands of designers and teams using Framer to turn ideas into high-performing websites, fast. The internet is your canvas.

Framer is simple to learn, and easy to master.

Check out the Framer Fundamentals Course to build on your existing design skills to help you quickly go live in Framer. Perfect for designers transitioning from Figma or Sketch.

Get 25% off for 3 months with code: THEDEEPVIEW

Report: Tariffs could ‘dent’ VC interest in AI 

Source: Unsplash

Coming off a week of radical volatility, tech stocks finally seem to have calmed down a bit. 

Shortly after imposing a sweeping series of global tariffs, President Donald Trump pressed the ‘pause’ button, at least for the next 90 days, dropping global tariffs to a base rate of 10% (with the exception of China, against which the U.S. has imposed a 145% tariff). Then on Friday, Trump said that phones, computers and computer chips would be exempt from his “reciprocal” tariffs, a cause — as tech bull Dan Ives pointed out — for celebration. 

The Trump Administration, however, has warned that a separate round of tariffs — specific, this time, to the semiconductor industry — is on the way. Earlier in April, the U.S. Commerce Department “initiated an investigation to determine the effects on the national security of imports of semiconductors,” according to a document that lays the groundwork for such a tariff. 

What happened: The uncertainty is already getting to venture capital investors, according to a recent PitchBook report.

  • PitchBook estimates that there were a total of around 4,000 VC deals in the first quarter of 2025, roughly on par with last year’s numbers. AI, according to PitchBook, is the exception to this “otherwise sluggish dealmaking environment” — Big Tech companies have not pulled back spending on AI, and significant deals (like OpenAI’s $40 billion round) continue to be signed. 

  • However, “while demand for AI remains strong, the impact of new tariffs on chip supply chains could dent VC appetite for these investments should companies struggle with pricing increases.”

And despite all the ongoing AI popularity and excitement, cost remains a massive challenge. Returns for end users and investors alike remain roughly nonexistent, given the enormous cash burn required to train and operate AI models, alongside their ongoing problems with security and reliability. So far, many top model developers have subsidized the true cost of use with a constant influx of VC dollars; OpenAI and Anthropic have both recently introduced more expensive tiers (ranging from $100 to $200 per month), even as open-weights providers like DeepSeek continue to offer comparable performance at radically lower prices.

More broadly, PitchBook wrote that “the recent tariff announcement has significantly impacted VC activity and IPO plans.” 

Investors are subsequently adopting a “wait-and-see approach until clarity and stability return.” 

All these business headwinds — lack of tariff clarity, market volatility, cost containment, negative consumer sentiment, etc. — “pressure startups’ ability to meet investors’ growth expectations,” which “could adversely impact the exit environment, keeping a lid on dealmaking activity,” according to the report. 

  • Open(X)AI: The Verge, citing unnamed sources, reported Tuesday that OpenAI is working on building a social media platform, similar to Elon Musk’s X. The project is in its early stages, and much remains unclear.

  • Google’s video generation: Google opened access (for Gemini Advanced subscribers) to its Veo 2 video-generation model on Tuesday. Training data and energy costs remain unclear; what is clear is that Google is intent on outdoing OpenAI.

  • Anthropic is playing catch-up: Anthropic on Tuesday rolled out two new product updates: one, a “research” capability quite similar to OpenAI’s “deep research,” and two, an integration with Google Workspace, which will help Claude “gain deeper insight into your work context.”

  • Fed resists pressure to rescue Treasury market (Semafor).

  • DeepSeek and chip bans have supercharged AI innovation in China (Rest of World).

  • Anthropic’s AI can now search your entire Google Workspace without you (VentureBeat).

  • U.S.’ inability to replace rare earths supply from China poses a threat to its defense, warns CSIS (CNBC).

  • Meta CEO Zuckerberg spars with FTC lawyer over meaning of emails cited in antitrust trial (AP News).

Sam Altman’s copyright defense is that GenAI is basically human 

Source: Jason Redmond / TED

On stage at TED 2025, Chris Anderson showed a ChatGPT-generated image of a Charlie Brown cartoon.

“At first glance,” he said, “this looks like IP theft.” 

At his mention of “IP theft,” the audience broke into applause, to which OpenAI chief Sam Altman responded: “You can clap about that all you want. Enjoy.” 

He added that we “need to figure out some sort of new model around the economics of creative output,” saying that humans have been taking inspiration from other humans for a long time, so generative AI models should be allowed to do the same. It’s a common talking point that doesn’t hold much water — the human creative process is vastly more complex and nuanced, according to psychologists who study creativity, than the statistical processing that underpins generative AI. 

The question of copyright has become a central pillar in and around the AI industry. It’s not such a surprising phenomenon; researchers were only able to get the large language models (LLMs) behind generative AI to their current level of capability by training them on internet-scale corpuses of human-produced content (prominently including books, articles, artwork, photographs, music, blogs, social media posts, etc.). 

  • Again, unsurprisingly, these developers have subsequently been sued by many of the human creators behind all that training data, since the creators were neither compensated for, nor even informed of, the use to which their work would be put. 

  • In court, the developers have consistently argued that their actions are protected by the U.S. fair use doctrine, a defense that, beyond being murky at best, is far from any sort of legal vindication. The U.S. Copyright Office, for one, has yet to weigh in on whether AI training is protected under fair use, and none of the lawsuits has yet reached a conclusion (though several significant cases have survived developers’ motions to dismiss). 

So, developers like OpenAI and Google have taken a different tack, pushing the government to enshrine copyright exemptions for AI training, in the interest, of course, of national security.

Both Elon Musk and Jack Dorsey recently advocated for the deletion of “all IP law.”

And though IP law makes it slightly more costly, between licensing fees and legal fees, for developers to produce generative AI systems, such protections have been around for a long time, and they clearly foster and enable innovation rather than stifle it. 

And, to that point, Google yesterday filed a pretrial brief in the Department of Justice’s monopoly lawsuit, writing: “absent protection for intellectual property, there exists little reason to invest in developing software.”

There are two things that Altman mentioned on that stage that I’d very much like to address: inspiration and democratization.

Let’s start with the first one. Inspiration. It is not a novel thing to suggest that generative AI models gain inspiration much the same way human creatives do. We’ve been hearing that quite loudly for at least two years.

The reality is a lot more complex.

In 2023, I embarked on a project to unpack that specific notion, speaking with experts in the fields of AI, psychology, creativity and art to discern the differences between the creative processes of both man and machine.

A couple of main points emerged. One involves randomness and generalization; researchers who study creativity have found that human creativity is often the result of truly random brain activity. Generative AI models, on the other hand, are bound by their training data. They struggle mightily to extrapolate beyond it.

That random brain activity is a demonstration of extrapolation; see, when we create, we aren’t just working to reproduce or twist a melody we heard or a brushstroke we saw — we are working to take that ‘inspiration’ and wrap it around our own experiences and our own emotions in order to make something new, something that didn’t exist ‘in the training set.’

Art, one researcher told me, is fundamentally about finding a means of expressing those inner experiences.

Algorithms don’t have inner experiences.

And to the point about democratization — humans have been making art since before we could speak, back when we were cave-dwellers, primitive and young. Generative AI does not democratize art; it does the opposite, adding in a long pipeline of expensive hardware and software that, importantly, is designed to skip the art-making process.

Pencils democratized art. The printing press democratized art. GarageBand massively democratized art.

In those innovations, we saw amplification that did not fundamentally alter the creative process; packaging and distribution just became more accessible.

But generative AI seeks to consolidate the creative business, rather than make distribution channels more accessible: endless pipelines of cheap, passable but instantaneous content, coming from mass subscriptions to just a few model developers, instead of a vast and varied ecosystem where millions of creatives are paid to create, more slowly, more thoughtfully, more soulfully.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “Something about the sky meeting the rocks at the left side of pic 2 said AI to my instinct.”

Selected Image 2 (Right):

  • “The lava flows on image 1 looked photoshopped.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!

Here’s your view on GPT-5:

  • 18% of you think we will never see GPT-5.

  • 19% think the idea of “4.1” is silly, and that OpenAI will just keep pushing GPT-5 back.

  • 17% think it might come this year, but it won’t be good.

  • 31% think it’s coming, and it’ll be awesome.

Would you be okay giving Anthropic (or any AI co) access to your personal Google Drive data?


If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.