⚙️ Former Google CEO tells AI startups to steal content and hire lawyers
Good morning. ChatGPT experienced a bit of an outage yesterday, leading to the usual spectrum of tweets from this one — “how are we going to work?” — to this one: “GPT-5 later today?!?!?!”
But there was no GPT-5.
It was just a regular outage.
Happy Friday.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: Helping Australia fight Varroa infestations
Source: Xalient
The Varroa mite (also called Varroa destructor) attacks European honeybees, killing them and destroying hives. Outbreaks of the mite were recently detected in certain areas of Australia, putting the country's European honeybee hives at risk.
In an effort to fight the invasion, Bega — a leading Australian dairy and food company — partnered with Xalient, a computer vision company. The result was the Purple Hive Project.
The details: The project has involved the placement of purple hives to monitor bee populations for Varroa mites.
The hives take advantage of Xalient’s edge computing technology, meaning they don’t require the cloud (internet access) to run.
Each hive includes cameras and computer vision AI technology that “can detect mite infection rates of 1 in 1,000 in less than an hour with 98% certainty.”
Why it matters: The purple hives allow for 24/7 monitoring, making early detection more feasible. If detected early enough, beekeepers can enact measures to quarantine and protect colonies of honeybees while destroying the mites.
Join this 3-hour ChatGPT & AI Masterclass (worth $199) by Growthschool to master AI tools and ChatGPT hacks at no cost.
In this masterclass, you will learn how to:
🚀 Do a quick Excel analysis & make AI-powered PPTs in just 5 minutes
🚀 Build your own CustomGPTs & personal AI assistant to save 10+ hours
🚀 Become an expert at prompting & learn 20+ AI tools
🚀 Research faster & make your life a lot simpler & more…
Grammarly tries to combat AI cheating
Source: Grammarly
The rise of generative AI created an accidental crisis in schools and universities around the world; in an attempt to respond to those students who “wrote” their essays with ChatGPT, educators and institutions armed themselves with AI detection software.
The problem is that the algorithms behind such software are as finicky and unreliable as the algorithms behind ChatGPT, resulting in a wave of false accusations of AI cheating that has continued unabated to this day.
Last year, Stanford researchers determined that detectors are extremely unreliable, especially for non-native English speakers.
You need only do a quick search on Reddit to find copious examples of students unsure of what to do after being falsely accused of AI cheating.
This week, Grammarly announced the coming launch of Grammarly Authorship, a non-algorithmic approach to ensuring content provenance.
The tool works across 500,000 apps and websites to identify the origin of each component of a document, laying out which parts were typed, which were copied and pasted and which were generated with AI.
The intention is to give students protection — in the form of detailed, reliable content-provenance reports — against false accusations.
The feature will be available in September.
Hedra is the most powerful character-creation tool ever made
Creating engaging video content with animated characters once took hours of meticulous editing. Now it takes seconds.
Hedra — an AI research company founded by former Stanford PhDs — has been making waves in the tech industry.
Their groundbreaking audio-to-video foundation model, character-1, creates long-form videos of expressive human characters from a single image.
In a surprise announcement today, Hedra unveiled character-1.5, the next generation of their model, featuring a new stylization feature that lets users turn themselves into characters to star in their own animated videos.
Hedra users save 5 hours every week on character creation, and with the Stylization Filter, they gain an additional hour to focus on what truly matters — storytelling and creativity.
AI coding startup Cosine has raised $2.5 million in seed funding.
AI video ad startup Reforge Labs has raised $3.9 million in seed funding.
Shake Shack, Serve Robotics roll out autonomous sidewalk robot delivery in Los Angeles (Reuters).
Consumer spending jumped in July as retail sales were up 1%, much better than expected (CNBC).
Consumer Financial Protection Bureau comments on AI risk in finance (CFPB).
OpenAI’s new voice mode threw me into the uncanny valley (The Verge).
Elon Musk’s financial woes at X have Tesla bulls fearing he will liquidate more stock (Fortune).
If you want to get in front of an audience of 200,000+ developers, business leaders and AI enthusiasts, get in touch with us here.
Google’s AI Overviews is coming to other countries
Source: Google
Google said Thursday that AI Overviews is rolling out to six countries outside of the U.S. — Brazil, India, Indonesia, Japan, Mexico and Britain. It will feature local languages, including Hindi and Portuguese.
AI Overviews first launched to pretty disastrous results in May — you might recall instances of the feature telling users to put glue on their pizza or to eat rocks.
Following the rough launch, Google rolled back the prevalence of the feature, allowing it to appear as an option less frequently.
"I have enough evidence to say that quality is only improving," Hema Budaraju, a senior director of product at Google, told Reuters.
Google said it is also adding hyperlinks to AI Overviews and will display websites to the right of AI-generated summaries.
The context: This comes as publishers remain concerned about the impact AI search will have on traffic, even as other AI search companies become entrenched in copyright-infringing controversy.
It also comes shortly after a U.S. judge ruled that Google has an illegal search monopoly; the Justice Department, according to Bloomberg, is considering a bid to break up the mega-cap tech giant.
Former Google CEO tells AI startups to steal content and hire lawyers
Source: Unsplash
Eric Schmidt, Google’s one-time CEO and executive chairman, gave a recent talk at Stanford, which was filmed and posted online. During the talk, he said a few things he came to regret; the video was subsequently taken down; this clip — and this complete transcript (plus this backup copy on Twitter) — are all that remain.
What happened: In the clip, Schmidt said that the reason Google lagged behind OpenAI in artificial intelligence was work-from-home culture: “Google decided that work-life balance and going home early and working from home was more important than winning.”
He later retracted his comments, telling the Wall Street Journal: “I misspoke about Google and their work hours. I regret my error.”
But he made a few other comments that he didn’t officially retract.
“If TikTok is banned, here’s what I propose each and every one of you do: Say to your (large language model) the following: ‘Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it’s not viral, do something different along the same lines.’ That’s the command. Boom, boom, boom, boom.”
“So, in the example that I gave of the TikTok competitor — and by the way, I was not arguing that you should illegally steal everybody’s music — what you would do if you’re a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn’t matter that you stole all the content,” he said. “And do not quote me.”
This element of his talk was reported by The Verge.
The Wild, Wild West: What Schmidt is talking about here — the mass stealing of content to build an AI product, then lawyering up — is, as he points out, indicative of the Wild West of Silicon Valley culture today.
The copyright debate that has become such an aggressive part of the genAI world is essentially an argument over this exact point; just about every major AI lab includes “publicly available” information that’s been scraped from the web.
We’re starting to learn what this means specifically for each company; there have been reports that Nvidia, for example, has scraped YouTube videos to build its AI models, as have Runway and OpenAI. Apple, meanwhile, has admitted to crawling “publicly available information” in building its Apple Intelligence models.
The image generators and genAI labs have not denied scraping the web to build their models; they have instead argued that the act is protected by the “fair use” clause of U.S. copyright law, something the U.S. Copyright Office has yet to weigh in on.
But the lawsuits — as we recently wrote — are stacking up and moving ahead; if and when precedent is established, this field could become very, very litigious. The fact that many AI companies — OpenAI prominent among them — are pursuing content licensing deals with publishers is likely some evidence that they know it would be better to train their models with permission from content rights-holders.
OpenAI told the U.K.’s House of Lords earlier this year that “it would be impossible to train today's leading AI models without using copyrighted materials.”
With a bit of legal precedent, this might be a mess too big even for Silicon Valley lawyers to clean up.
Either way, this mentality and this culture are just more evidence that what Silicon Valley needs is regulation to ensure fair business practices and to mitigate harm. Moving fast and breaking things — whether through models that spit out explicit deepfakes of people without their consent, or by stealing other people’s content — can’t be allowed to rage unchecked.
And, side note, if one of you has an LLM on hand that could just … copy TikTok and spit out a viable, functioning competitor, let me know — that seems a bit outside of current capabilities.
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Grok 2:
Just about half of you said you don’t pay for X Premium; the rest were pretty evenly split between thinking that Grok is just fine, that Grok is great and that guardrails would be good.
I don’t pay for X Premium:
“I will never pay for X Musk if I can avoid it. Even if this thing worked, I don’t trust it to provide data security and hallucinations will always be there.”
It’s great:
“I like it for giving me relevant, up-to-date information, but other chat bots handle other things better. I use others for code generation for example.”
Do you think Eric Schmidt's idea of successful products + legal teams will work long-term?