⚙️ Family poisoned after using AI-generated mushroom hunting book
Good morning. Nvidia was sued last week by YouTuber David Millette (who recently sued OpenAI). The claims here are not of copyright infringement, but of unjust enrichment and unfair competition.
The timing pretty closely follows a 404 Media investigation that revealed the extent to which Nvidia trained its models on YouTube videos without permission, consent or compensation.
Anyway, I hope that wherever you are, you’re enjoying the summer while it’s still here.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
- AI for Good: Restoring speech to ALS patients
- California challenges deepfake porn sites
- OpenAI shuts down covert Iranian influence operation
- Family poisoned after using AI-generated mushroom hunting book
AI for Good: Restoring speech to ALS patients
Source: UC Davis Health
One of the biggest promises of AI lies in brain-computer interface (BCI) devices, which combine hardware implanted in the brain with software that uses artificial intelligence and machine learning to decode brain signals and express them digitally.
One of the arenas BCI aims to conquer is restoring speech to patients who have lost the ability to speak.
Researchers at UC Davis Health recently pulled it off.
The details: Researchers implanted sensors in the brain of a 45-year-old man with amyotrophic lateral sclerosis (ALS) who had largely lost the ability to speak. The implant decoded his attempts to speak, turning those brain signals into lines of text for a computer to read aloud.
And, loaded up with the latest text-to-speech tech, the computer’s ‘voice’ sounded like the patient’s pre-ALS voice.
The system achieved — and maintained — a 97% accuracy rate, the “highest of its kind,” according to UC Davis.
The real breakthrough here, according to the researchers, is accurate, reliable, real-time decoding; earlier attempts fell well short in these areas.
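To make that pipeline concrete, here is a deliberately simplified sketch of the decode-to-speech flow. Everything in it (the phoneme inventory, the feature shapes, the linear scorer) is an illustrative assumption, not the UC Davis system, which uses decoders trained on the patient’s own neural recordings.

```python
# Hypothetical, minimal sketch of a speech-BCI decoding pipeline.
# All names and shapes are illustrative assumptions; a real system
# uses trained neural networks, not a random linear scorer.
import numpy as np

PHONEMES = ["HH", "EH", "L", "OW", "SIL"]  # toy phoneme inventory

def decode_window(features: np.ndarray, weights: np.ndarray) -> str:
    """Map one window of neural features to its best-scoring phoneme."""
    scores = features @ weights  # linear stand-in for a trained decoder
    return PHONEMES[int(np.argmax(scores))]

def decode_utterance(windows: np.ndarray, weights: np.ndarray) -> list[str]:
    """Decode a sequence of windows, dropping silence and repeats."""
    raw = [decode_window(w, weights) for w in windows]
    return [p for i, p in enumerate(raw)
            if p != "SIL" and (i == 0 or p != raw[i - 1])]

rng = np.random.default_rng(0)
windows = rng.normal(size=(20, 128))   # 20 windows x 128 electrode features
weights = rng.normal(size=(128, len(PHONEMES)))
print(decode_utterance(windows, weights))
# Downstream (not shown): phonemes -> words -> personalized TTS voice.
```

The interesting design constraint is the last step: rather than a generic synthetic voice, the decoded text is fed to a text-to-speech model conditioned on recordings of the patient’s pre-ALS voice.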
“Not being able to communicate is so frustrating and demoralizing,” said Casey Harrell, the patient who took part in the study. “It is like you are trapped. Something like this technology will help people back into life and society.”
The toughest part about onboarding new employees is teaching them how to use the software they’ll need.
Guidde makes it easy.
How it works: Guidde’s genAI-powered platform enables you to quickly create high-quality how-to guides for any software or process. And it doesn’t require any prior design or video editing experience.
With Guidde, teams can quickly and easily create personalized internal (or external) training content at scale, efficiently sharing knowledge across organizations while saving tons of time for everyone involved.
California challenges deepfake porn sites
Source: Created with AI by The Deep View
San Francisco City Attorney David Chiu last week announced a lawsuit against the owners of 16 of the most-visited nonconsensual deepfake porn websites. The suit has been filed on behalf of the People of the State of California.
The details: The suit alleges that the companies have violated state and federal laws prohibiting deepfake pornography, revenge pornography and child pornography. The names of the websites have not been revealed, in order to avoid driving traffic to them.
“This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” Chiu said. “We have to be very clear that this is not innovation — this is sexual abuse.”
The widespread accessibility of genAI has enabled the harassment of women and girls — everyone from Taylor Swift to high school girls — through the cheap and easy spread of nonconsensual deepfake pornographic images.
Chiu said that the websites in question have been visited more than 200 million times in the first six months of 2024 alone; most require subscriptions for their “nudify” services, meaning that they are “profiting off of nonconsensual pornographic images of children and adults.”
I’ve written a lot about the world of deepfake nonconsensual porn. It is horrifying and patently disturbing that these sites — which are very clear about what they’re offering — have remained a Google search away for so long.
If you're looking to leverage AI in your investment strategy, you need to check out Public.
The all-in-one investing platform allows you to build a portfolio of stocks, options, bonds, crypto and more, all while incorporating the latest AI technology — for high-powered performance analysis — to help you achieve your investment goals.
Join Public, and build your primary portfolio with AI-powered insights and analysis.
AI-powered housing solution EliseAI raised $75 million — at a $1 billion(+) valuation — in Series D funding.
GenAI startup FutureAI raised $5.8 million in seed funding, announcing a new partnership with Google.
A new consumer privacy battle is underway as tech gadgets capture our brain waves (CNBC).
Nvidia sued for scraping YouTube after 404 Media investigation (404 Media).
X (Twitter) says it is closing operations in Brazil due to judge's content orders (Reuters).
Judge temporarily blocks sports streaming service owned by Disney, Fox, Warner Bros. (NBC News).
Video game actors go on strike over AI protections (Semafor).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
OpenAI shuts down covert Iranian influence operation
Source: Unsplash
OpenAI said Friday that it had banned accounts linked to a covert Iranian operation that was using ChatGPT to generate content regarding, among other things, the U.S. presidential election.
The firm said it saw no indication that the content reached a large audience.
The details: The “cluster” of accounts that OpenAI banned was linked to operation Storm-2035, an Iranian network that has previously operated fake news sites meant to target U.S. voters.
The network was using ChatGPT to generate content, then sharing that content on social media. OpenAI said it identified “dozens of accounts” linked to the network on Twitter and one on Instagram, though added that none of the social posts seemed to have attracted much attention.
“We take seriously any efforts to use our services in foreign influence operations,” OpenAI said.
The context: OpenAI in May reported that it had disrupted five different covert influence operations that were using ChatGPT to generate polarizing misinformation.
Family poisoned after using AI-generated mushroom hunting book
Source: Created with AI by The Deep View
When we talk about AI-generated misinformation, it’s often in the context of politics: mass-produced content designed to turn people against politicians and bills, or to dissuade them from voting.
But as MIT’s recently published AI Risk Repository points out, the full threat of misinformation is a little broader in that false or misleading AI-generated content could pollute entire information ecosystems, leading, in some cases, to physical harm.
You probably know where I’m going with this.
What happened: A U.K.-based Reddit user last week said that their “family (was) poisoned after using (an) AI-generated mushroom identification book we bought from a major online retailer.” The book, according to the user, was described as being a way for “beginners to safely get in to picking mushrooms.”
After they got sick, they investigated the book more closely, finding plenty of evidence that it was likely AI-generated. Some of this evidence included obvious editorial errors: “In conclusion, morels are delicious mushrooms which can be consumed from August to the end of Summer. Let me know if there is anything else I can help you with.”
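As an aside, that leftover chatbot sign-off is exactly the kind of artifact a trivial filter can catch. Here is a toy sketch; the phrase list is our own illustrative assumption, and string matching like this says nothing about AI text that has been even lightly edited. Real AI-text detection remains an open problem.

```python
# Toy heuristic for spotting leftover chatbot boilerplate in text.
# The phrase list is an illustrative assumption; this only catches
# the most careless copy-paste artifacts, like the one quoted above.
CHATBOT_TELLS = [
    "as an ai language model",
    "let me know if there is anything else i can help you with",
    "i hope this helps",
]

def flag_chatbot_residue(text: str) -> list[str]:
    """Return any known chatbot phrases found in the text."""
    lowered = text.lower()
    return [tell for tell in CHATBOT_TELLS if tell in lowered]

excerpt = ("In conclusion, morels are delicious mushrooms which can be "
           "consumed from August to the end of Summer. Let me know if "
           "there is anything else I can help you with.")
print(flag_chatbot_residue(excerpt))
```

Run on the quoted excerpt, it flags the sign-off immediately, which is why the most careless of these books are the easiest to spot, and the lightly cleaned-up ones the most dangerous.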
Upon searching for the author’s name, the user also discovered that the “expert” doesn’t appear to exist online. The online store in question — which the user declined to name — has taken down the page.
“We did not know it was AI-generated when we bought it,” the user wrote. “This was not disclosed on the website!”
Welcome to the jungle: The issue of AI-generated books proliferating online has been ongoing for at least a year. A few well-known instances have involved fake novels published under the names of established authors.
But there were also instances — reported by 404 Media — of AI-generated mushroom guidebooks flooding Amazon last summer. Those books were later taken down, though the episode highlighted a new type of downstream AI-generated harm.
As the New York Mycological Society said at the time: “Amazon and other retail outlets have been inundated with AI foraging and identification books. Please only buy books of known authors and foragers, it can literally mean life or death.”
The danger here is clear.
We have built up default trust in plenty of areas; if someone’s written a book, chances are, they know what they’re talking about. In the pre-AI world, rigorously vetting the author of a chosen book wasn’t really necessary (the publisher, at the very least, stood between an author and straight-up misinformation).
But genAI (combined with social media and self-publishing) has allowed certain actors to pass themselves off as experts in order to make a quick buck.
GenAI makes things passable enough at first glance that there is a real risk — as that Reddit post proves — not of people following hallucinatory advice from ChatGPT, but of people consuming downstream content that has been dangerously polluted, unbeknownst to those involved.
This new, but necessary, world of distrust by default demands much more of society. Assumptions are no longer good enough. Everyone needs to become a capable fact-checker.
And the reality is that most people won’t.
In some cases, people will get hurt. In others, voters may be swayed and democracy poisoned.
“I’ve said it before, I’ll keep saying it, AI isn’t going to destroy us in some Terminator-style uprising,” a former librarian wrote on Twitter in response to the Reddit post. “AI is going to destroy us by polluting the information we rely on with dangerous misinformation and bullshit which people are going to wind up believing.”
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on ex-Google CEO Eric Schmidt’s startup advice:
Nearly half of you think his advice — essentially to steal IP to build a product and, if it’s successful, hire lawyers to clean up the mess — won’t pan out long-term. The lawsuits, you say, are coming.
Around a third of you think it’ll keep working — it has worked so far, after all.
The lawsuits are coming:
“Lawsuits are coming, and the sooner the better. Fair use was never intended to cover use of this type.”
Something else:
“I am surprised that there is no sense of morality incorporated into any aspect of these responses — is stealing OK then as long as you are not caught? Even in the jungle among the most primitive tribes, there are morals and values.”
What do you think should be done about AI-generated books?
*Public disclosure: All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1828849), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.
Alpha is an experiment brought to you by Public Holdings, Inc. (“Public”). Alpha is an AI research tool powered by GPT-4, a generative large language model. Alpha is experimental technology and may give inaccurate or inappropriate responses. Output from Alpha should not be construed as investment research or recommendations, and should not serve as the basis for any investment decision. All Alpha output is provided “as is.” Public makes no representations or warranties with respect to the accuracy, completeness, quality, timeliness, or any other characteristic of such output. Your use of Alpha output is at your sole risk. Please independently evaluate and verify the accuracy of any such output for your own use case.