
⚙️ Language and thought are not the same thing

Good morning. Yesterday, OpenAI signed yet another media licensing deal, this time with Time magazine. If my math is right, this marks OpenAI’s eighth media deal (Vox, News Corp, The Atlantic, the FT, Dotdash Meredith, Axel Springer and the Associated Press).

An equally long list of media companies have opted to sue OpenAI instead.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Scientists are listening for ocean health

  • OpenAI is making more money from its chatbots than Microsoft

  • Paper: AI’s two truths and a lie

  • Language and thought are not the same thing

AI for Good: Scientists are listening for ocean health

Source: NOAA

In 2022, a team of scientists at the National Oceanic and Atmospheric Administration (NOAA) completed a four-year research project whose goal was to listen to the ocean. The scientists likened the approach to the way a doctor listens to your heart with a stethoscope; using sound, they were able to identify the ‘normal’ sounds of each environment, which allowed them to detect any abnormalities. 

The project produced far more data than humans could sift through by hand, which required the scientists to lean on artificial intelligence and machine learning.

The details: The team deployed sound-recording devices at 31 sites within marine sanctuaries across the country, collecting more than 300 terabytes of acoustic data in total.

  • The team then applied machine learning to sift through that data, identifying the normal trends and patterns of each ecosystem and flagging abnormal sounds (a toy sketch of this kind of pipeline follows below).

Why it matters: This automated listening can help scientists better understand, track and respond to marine health. Changes in an ecosystem’s soundscape can point to problems, which can lead, in turn, to solutions.
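To make the idea concrete, here is a minimal sketch of what such a pipeline can look like: summarize each recording as the energy in a handful of frequency bands, fit an anomaly detector on known-normal clips, then flag new recordings that fall outside that baseline. Everything in it — the band-energy features, the IsolationForest detector, the synthetic stand-in audio — is an illustrative assumption, not NOAA’s actual method.

```python
# Toy acoustic anomaly detection: baseline the "normal" soundscape, flag outliers.
# Illustrative only; NOAA's real pipeline is far more sophisticated.
import numpy as np
from scipy import signal
from sklearn.ensemble import IsolationForest

def band_energies(audio: np.ndarray, sample_rate: int, n_bands: int = 16) -> np.ndarray:
    """Summarize a clip as its total power in log-spaced frequency bands."""
    freqs, _, spec = signal.spectrogram(audio, fs=sample_rate)
    power = spec.mean(axis=1)  # average power per frequency bin over time
    edges = np.logspace(np.log10(freqs[1]), np.log10(freqs[-1]), n_bands + 1)
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(seed=0)
SAMPLE_RATE = 8_000  # Hz; a stand-in for real hydrophone recordings

# Fit a baseline on "normal" clips (synthetic noise standing in for a
# sanctuary's ordinary soundscape).
normal = np.stack([band_energies(rng.normal(size=SAMPLE_RATE * 10), SAMPLE_RATE)
                   for _ in range(50)])
detector = IsolationForest(random_state=0).fit(normal)

# A much louder clip falls outside the baseline; predict() returns -1 for outliers.
loud_clip = 5.0 * rng.normal(size=SAMPLE_RATE * 10)
features = band_energies(loud_clip, SAMPLE_RATE).reshape(1, -1)
print("abnormal" if detector.predict(features)[0] == -1 else "normal")
```

In practice the heavy lifting is in the features and the labels: a real system would work from spectrograms of hydrophone audio, train per-site baselines that account for seasons and tides, and route flagged clips to scientists for review.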

AI making website creation easy

Transform your ideas into stunning websites within minutes with Mixo — your go-to for effortless landing pages, business events, app launches and waiting lists.

With its AI-powered tooling, you can build and launch beautiful landing pages at lightning speed!

OpenAI is making more money from its chatbots than Microsoft 

Source: Unsplash

Competition was baked into the terms of Microsoft’s rather strange partnership with OpenAI. 

When the software giant first invested in OpenAI in 2019, their agreement was to sell access to OpenAI’s models in tandem, sharing the revenue. But the terms of that deal changed when Microsoft poured an extra $10 billion into OpenAI’s coffers at the end of 2022; now, they would compete for customers, with Microsoft selling access to OpenAI’s models through Azure and OpenAI going direct. 

According to The Information, Microsoft has fallen behind in that race. 

  • OpenAI in March reached $1 billion in annualized revenue from selling model access.

  • Microsoft, through its Azure OpenAI Service offering, only recently hit that mark. 

  • OpenAI’s success reportedly led managers at Microsoft to cut prices for the Azure OpenAI Service in a bid to sign customers before OpenAI could win them over.

Microsoft executives expect the Azure OpenAI Service to reach $2 billion in annualized revenue a year from now, according to The Information.

Still, the money flows both ways: OpenAI pays Microsoft for its Azure servers, and Microsoft takes a 20% commission on OpenAI’s API deals.

It remains unclear, however, how much Microsoft is earning from Copilot, and how much OpenAI is earning from its ChatGPT Enterprise offering.


  • Want to adopt GenAI for software development but don’t know where to start? This buyer's guide helps engineering leaders cut through the noise*

  • Perplexity’s Origin Story: Scraping Twitter With Fake Academic Accounts (404 Media).

  • OpenAI trained a model to catch ChatGPT’s coding mistakes (OpenAI).

  • AI pioneer Illia Polosukhin, one of Google’s ‘Transformer 8,’ wants to democratize artificial intelligence (CNBC).

  • The Center for Investigative Reporting Sues OpenAI, Microsoft for Copyright Violations (Reveal).

Paper: AI’s two truths and a lie

Source: Created with AI by The Deep View

Data lies at the heart of artificial intelligence. 

The industry’s need for more and better data has fueled the scraping that has become a hallmark of the internet: Twitter now trains its AI on your tweets, Meta on your Facebook and Instagram posts, Google on your Reddit posts and everyone on your blogs and articles.

In a recent paper, Boston University law professor Woodrow Hartzog argues that this process, beyond being exploitative, will only get worse. Framing the industry through the lens of “two truths and a lie,” Hartzog lays out a regulatory framework designed with the consumer in mind. 

The first truth: The “primary certainty” of AI is that the commercial actors behind it will “take everything they can from us.” 

  • The industry needs to consume data, and he argues that it will exploit consumers to get access to that data. 

  • “Industry’s drive to extract everything from us is buoyed by narratives that depict AI as inevitable and technological innovation as inherently beneficial.”

The second truth: “We will get used to it.” 

  • Hartzog argues that, after small initial backlashes, people will become so desensitized to privacy violations and increased surveillance that they will no longer care. “Because it happens incrementally, we are on track to tolerate everything,” he said.

  • We are already seeing this happen: a majority of Americans are skeptical that there’s anything they can do to truly ensure data privacy, according to Pew Research Center.

The lie: That “this will be done for our benefit.”

  • The vision of AI offered by its developers is vast at both extremes (literal utopia or the end of human civilization). The intent of this, Hartzog argues, is marketing designed to heighten power dynamics.

  • “The pursuit of profit over people is why many new AI tools feel like a solution in search of a problem,” he said. “AI tools might benefit us, but they will not be created for our collective benefit.”

He proposes a four-part regulatory framework as a way forward: Duties to prioritize people over profit, Defaults to limit data collection, Design Rules to ensure accurate representation of AI tools and Data Dead Ends, which would provide a “backstop to resist normalization of exploitation.”

Learn AI like it’s 2024

Ready to become a master of AI in 2024?

Brilliant has made learning not just accessible, but engaging and effective. Through bite-sized, interactive lessons, you can master concepts in AI, programming, data analysis, and math in just minutes a day—whenever, wherever.

  • Learn 6x more effectively with interactive lessons

  • Compete against yourself and others with Streaks and Leagues

  • Explore thousands of lessons on topics from AI to going viral on Twitter

  • Understand the concepts powering everything from LLMs like ChatGPT to quantum computers

Unlike traditional courses, Brilliant offers hands-on problem solving that makes complex concepts stick, turning learning into a fun and competitive game.

Join 10 million other proactive learners around the world and start your 30-day free trial today.

Plus, readers of The Deep View get a special 20% off a premium annual subscription.

Language and thought are not the same thing

Source: Waymo

We’ve talked before about the idea that language is not indicative of intelligence. At the time, this was in the context of Google’s messy rollout of AI Overviews, but the challenge was and remains the same: We tend to associate language with thought/intelligence, because language is how we communicate our own thoughts. 

In the world of AI, however, where some developers are devoted to the idea of building an artificial general intelligence, and where many more discuss AI models in human terms (aka anthropomorphizing), that distinction becomes much more important. 

New research — published this month in Nature — found that language is likely nothing more than a (very powerful) tool of communication.

Let’s get into it: McGovern Institute neuroscientist Evelina Fedorenko and a team of researchers used functional MRI to identify individuals’ language-processing networks. Once those networks were identified, the researchers could monitor their activity during a range of different tasks (from solving a sudoku puzzle to thinking about other people’s beliefs).

  • While people — perhaps because of their inner voice — tend to believe that “language is the medium of thought,” the researchers found that “your language system is basically silent when you do all sorts of thinking.”

  • This is in line with observations of people who have lost the ability to speak or process language due to injury or illness, yet can still think and plan without a problem.

Language, Fedorenko argues, “plausibly co-evolved with our thinking and reasoning capacities, and only reflects, rather than gives rise to, the signature sophistication of human cognition.” 

The implications of this research in the AI world are clear, but in the words of Cognitive Resonance founder Ben Riley: “Human lives are enriched by language — deeply enriched — but they are not defined by language. Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, move about the world, our range of what we can experience and think about remains vast.”

  • “Take away language from a large language model … and you are left with literally nothing.”

A number of researchers, including Yann LeCun, Gary Marcus, Chomba Bupe and Melanie Mitchell, have said much the same thing. 

If there is one thing worth taking away from this proliferation of chatbots and generative AI, it is a deepening appreciation of the elegance of the human mind. We might not understand how our minds work, but we do know that our minds are graceful, powerful organic constructions; the very fact that we’ve developed tools of communication as sophisticated as language (and that we’ve been figuring out ways to communicate since the beginning) feels far more magical to me than the statistically correlated output produced by genAI. 

  • This also raises the bar of what might constitute a “thinking” machine, as the ability to speak is clearly not indicative of cognition; it is merely an illusion of it.  

The consideration that this raises — that AI is a tool for communication not fueled by thought — calls into question the very purpose of that communication. This is perhaps a bit of empirical evidence behind the calls you might see on social media from consumers who refuse to read AI-generated work: if someone couldn’t be bothered to write it, why should anyone be bothered to read it?

A relevant component of this conversation is the origin of the term artificial intelligence — thus far a misnomer — which was coined decades ago by John McCarthy, who said in 1973 that he invented the term “because we had to do something when we were trying to get money for a summer study in 1956.” 

But that’s a story for another time.


A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your view on autonomous vehicles:

Around 30% of you said you “absolutely” would ride in an AV, but a significant number just weren’t sure. Around 10% of you have taken a ride in an AV and have no interest in doing so again.

Unsure:

  • “There should be (a) significant amount of public trust in this tech before I become an adopter. Also, it should provide the (passengers) means to take full control in case of unexpected situations. However, this is prone to abuse and will probably (be) limited or be decided by the software rather than the human in the car. If so I prefer not to be that human.”

Absolutely:

  • “I really can't imagine a robotaxi would be any scarier than some of the human drivers I've encountered. I'd try it at least once to say I did it.”

How do you feel about AI-generated content?

Are you less interested in reading/watching something that you know was made with genAI?
