
⚙️ Interview: The strange case of the "AI" buzzword

Good morning. Some media companies — The New York Times, for instance — looked at OpenAI and said ‘We’ll see you in court.’ Others — Axel Springer, the FT — have signed content licensing deals instead.

News Corp (the company behind The Wall Street Journal and many other outlets) just joined the ranks of the second group, signing a five-year deal with OpenAI that, according to the Journal, is worth around $250 million.

And last week, I spoke with Vineet Jain, CEO of Egnyte, about the rampant hype in the AI industry. Read on for the full story.

In today’s newsletter: 

  • 🌎 International AI Safety Report: There’s no current way to ensure that genAI is safe

  • 📄 Altman says he didn’t know about OpenAI’s equity clause – the documents show a different story 

  • 📚 LLMs and the language of (un)intelligence 

  • 🛜 Interview: The strange case of the “AI” buzzword

International AI Safety Report: There’s no current way to ensure that genAI is safe

Photo by Yohan Cho (Unsplash).

The report — which focuses on general-purpose systems — calls out the fact that scientists don’t really know what will happen with AI. “A wide range of possible outcomes appears possible even in the near future, including both very positive and very negative outcomes, as well as anything in between.”

The highlight reel: Section 5 of the report breaks down risk management efforts, but says “existing methods are insufficient to prove that systems are safe.” 

Dig deeper: The report also found that, while scientists are beginning to explore methods of making models more robust, reliable and trustworthy, “there is currently no approach that can ensure that general-purpose AI systems will be harmless in all circumstances.”

Many of the risks highlighted in the report have to do with misuse and abuse of the tech; many of these issues, the report says, can be solved by putting more humans into the safety loop.

Altman says he didn’t know about OpenAI’s equity clause — documents show a different story

Sam Altman, OpenAI CEO (Microsoft Build, 2024).

OpenAI has been having a bit of a scandalous time of late. The startup lost safety researchers, then dissolved its alignment team, then was dealt a bunch of safety-related criticism from those departed researchers and managed to get itself into a very public fight with Scarlett Johansson.

In the midst of this, it came out that OpenAI requires every departing employee to sign non-disparagement and non-disclosure agreements in order to keep their vested equity. Altman said he was unaware of the provision.

But Vox reporter Kelsey Piper obtained hundreds of documents from ex-employees — bearing the signatures of OpenAI executives, including Altman — detailing the practice. Vox published several of these documents in a recent piece.

  • “It seems to me that I’m being asked to sign away various rights in return for being allowed to keep my vested equity,” one ex-employee wrote in an email to OpenAI HR.

OpenAI told Vox that it is removing the non-disparagement clauses and releasing former employees from those clauses. 

Trust is a core element of responsible AI development. An environment of lies, half-truths and abusive behavior seems to surround this company, and that bodes ill either for OpenAI or for everyone else.

Together with Numbers Station

Data-driven insights aren’t just for Analytics teams.

Numbers Station is an AI-native data analytics platform that uses a multi-agent architecture to unlock the power of your data — from sourcing, cleaning, and transforming, to getting answers and integrating with external tools to execute on insights.

Numbers Station integrates multiple technologies like a semantic catalog that understands your business’s language, so your non-technical teams can ask high-level questions like: 

  • “What effects did the last promotion have on the business?”

  • "What were our most profitable products in January, by region?" 

  • "How do I increase sales for my least popular items?"

Get started quickly — no new infrastructure or prior AI experience required.

Redefine your analytics stack with Numbers Station. 

Skip the waitlist and schedule a demo today.

LLMs and the language of (un)intelligence

Sometimes you just have to look up.

Photo by Joshua Sortino (Unsplash).

Because language is the vessel through which we communicate our own knowledge, we tend to associate intelligence and understanding with it. That association makes it easy for many people to see the textual output of LLMs and assume these models understand the meaning of that output (TL;DR: they do not).

  • As Jacob Browning and Yann LeCun wrote in 2022: “Just because a machine can talk about anything, that doesn’t mean it understands what it is talking about. This is because language doesn’t exhaust knowledge; on the contrary, it is only a highly specific, and deeply limited, kind of knowledge representation.”

In a recent (prime) example of this, Google’s AI Overview suggested that a user should add “non-toxic glue” to pizza sauce to keep the cheese from sliding off. Who knows, it might even work. But I prefer my list of cheese ingredients to begin and end with mozzarella (preferably smoked if I can wrangle it). 

The response was pulled not-quite-verbatim from an 11-year-old Reddit post, in what cognitive scientist Gary Marcus called “partial regurgitation.”

LLMs ≠ AGI: A true artificial intelligence, Marcus said, would relate this ‘recipe’ to human biology, psychology & other pizza recipes and would know that glue does not an edible pie make. 

  • “Partial regurgitation, no matter how fluent, does not, and will not ever, constitute genuine comprehension,” he said. “Getting to real AI will require a different approach.”

💰 AI Jobs Board:

  • Senior Data Engineer, Full Stack: Matterworks · United States · Hybrid; Somerville, MA · Full-time · (Apply here)

  • Senior Technology Specialist, Data & AI: Microsoft · United States · Hybrid; multiple locations · Full-time · (Apply here)

  • Research Engineer: Altera · United States · Menlo Park, CA · Full-time · (Apply here)

📚 Interesting Reads:

  • There’s a breezy, 5-minute morning read that keeps you sharp on everything business and tech.

    • The Hustle covers stories that matter to consumers, and summarizes the fascinating rabbit holes you don’t have time to sift through yourself. 2M people get the daily in their inbox to stay informed, and have some fun. You should come through.  

🌎 The Broad View:

  • How a social media influencer-turned-politician might soon become the mayor of one of Mexico’s wealthiest cities (Rest of World).

  • An Australian biometrics firm was hacked (Wired).

  • The world’s largest oil company plans to hit net-zero emissions without reducing its oil operations. The CEO sees no contradiction between these two goals (Fortune).

*Indicates a sponsored link

Together with Enquire Pro

Enquire PRO is designed for entrepreneurs and consultants who want to make better-informed decisions, faster, leveraging AI. Think of us as the best parts of LinkedIn Premium and ChatGPT.
 
We built a network of 20,000 vetted business leaders, then used AI to connect them for matchmaking and insight gathering.

Our AI co-pilot, Ayda, is trained on their insights and can deliver a detailed, nuanced brief in seconds. When deeper context is needed, use a NetworkPulse to ask the network, or browse for the right clients, collaborators, and research partners.

Right now, Deep View readers can get Enquire PRO for just $49 for 12 months, our best offer yet. 

Click the link, sign up for the annual plan, and use code DISCOUNT49 at checkout for the AI co-pilot you deserve.

Interview: The strange case of the “AI” buzzword

Created with AI by The Deep View.

We’ve been living with AI for years. Think predictive text, social media algorithms, facial recognition, etc.

A whole new world: But when ChatGPT — a massive predictive text model — came out, people were able to interact with the “AI” (an unfortunately necessary though not wholly accurate term). And companies around the world decided they had to follow suit; thus generative AI — chatbots designed to produce creative output from text prompts — became very popular.

Hate to burst your bubble: Many people, from scientists to venture capitalists, have begun calling out the intensity of the genAI hype cycle, with several likening it to the dot-com bubble of the late ‘90s.

  • Vineet Jain, CEO of cloud collaboration platform Egnyte, sees tons of potential for AI to be transformative. But right now, he thinks it is way overhyped. 

He told me that Egnyte has been employing machine learning and AI tools for a long time. But the term “generative AI” has become such a powerful buzzword that the company has had to “literally repackage the existing stuff and throw in” a few generative AI capabilities to stay competitive. 

  • He said that in the enterprise, concerns & skepticism around data privacy, security and hallucinations, combined with the high cost of employing genAI, make AI a “losing proposition” right now.  

No boost to the bottom line: Jain expects a “zero-dollar increase” for this fiscal year from Egnyte’s explorations in generative AI. 


  • Canyon: An all-in-one platform for job-seekers that helps you perfect your resume and track your applications.

  • RemoteBase: A marketplace that matches startups with software engineering talent.

  • Translate.video: A tool to transcribe videos into 75+ languages.

Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link

SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.

One last thing 👇

That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?


We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View