⚙️ The FTC’s AI crackdown
Good morning, and happy Friday. This week, Mark Zuckerberg and Meta unveiled Orion, a prototype of the company’s AR glasses, designed to merge the virtual and physical worlds in one seamless interface.
But that all got drowned out by the leadership chaos (again) at OpenAI. Now, Sam Altman is all that’s left … this has certainly not made me trust OpenAI more.
Didn’t think I could trust them any less, but here we are.
In today’s newsletter:
— Ian Krietzberg, Editor in Chief, The Deep View
MBZUAI Research: AI-generated text detection
Illustrator: Ben Hickey
Shortly after ChatGPT — and the ever-lengthening list of similar GenAI chatbots — demonstrated to people (and students) around the world their apparent ability to spit out essays in seconds, came the technological response: GenAI text detectors.
The problem with these detectors — as we’ve reported — is that they are just as unreliable as the systems generating the text in the first place. Instances of false accusations of GenAI-powered cheating have spiked in high schools and colleges across the globe.
Recent research from the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) adds more credence to the idea that AI-powered text detection isn’t a reliable method.
The details: The researchers presented a large-scale multi-generator, multi-domain and multilingual dataset — dubbed “M4” — for machine-generated text detection.
They tested seven different detectors on the dataset, finding that the systems struggle to differentiate between AI-generated and human-written text “if the texts come from a domain, a generator, or a language that the model has not seen during training.”
Noting the rate of LLM improvement — which will continue to make detection more challenging — the researchers said that their results “show that the problem is far from solved and that there is a lot of room for improvement.”
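The cross-domain failure the researchers describe is easy to reproduce in miniature. Below is a toy sketch — not the M4 benchmark or MBZUAI’s detectors, and every sample text is fabricated — showing how a detector fit to one domain’s stylistic cues can collapse on an unseen domain where those cues flip:

```python
# Toy sketch: a one-feature "detector" that separates human from machine text
# by average word length. It looks perfect on its training domain and fails
# completely on an unseen domain. All samples below are made up.

def avg_word_len(text):
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def fit_threshold(human, machine):
    # Midpoint between the two classes' mean feature values.
    h = sum(map(avg_word_len, human)) / len(human)
    m = sum(map(avg_word_len, machine)) / len(machine)
    return (h + m) / 2

def accuracy(threshold, human, machine):
    # Predict "machine" when average word length exceeds the threshold.
    hits = sum(avg_word_len(t) <= threshold for t in human)
    hits += sum(avg_word_len(t) > threshold for t in machine)
    return hits / (len(human) + len(machine))

# Training domain: machine text happens to use longer, formal words.
train_human = ["we got the new specs and they look fine to me",
               "can you send me the doc when it is done"]
train_machine = ["certainly, comprehensive documentation facilitates understanding",
                 "furthermore, systematic evaluation demonstrates considerable improvement"]

# Unseen domain: the stylistic cue is reversed, so the detector fails.
test_human = ["notwithstanding considerable institutional counterarguments, implementation proceeded",
              "subsequently, comprehensive renegotiation substantially improved outcomes"]
test_machine = ["yes the plan is ok and we can go with it now",
                "sure i can do that for you later today"]

t = fit_threshold(train_human, train_machine)
print("in-domain accuracy:", accuracy(t, train_human, train_machine))      # 1.0
print("out-of-domain accuracy:", accuracy(t, test_human, test_machine))    # 0.0
```

Real detectors use far richer features than word length, but the failure mode is the same: any cue learned from one domain, generator or language can mislead on another.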
Why it matters: This adds weight to two separate ideas in the realm of AI. One, that LLMs are nothing more than predictive text generators incapable of reasoning; here, the fact that the systems struggle when the domain isn’t included in the training data is telling. And two, that such detectors should not be used as an absolute indicator of machine-generated text in any scenario, especially in schools.
To learn more about MBZUAI’s research, visit their website.
Boost your software development skills with generative AI. Learn to write code faster, improve quality, and join 77% of learners who have reported career benefits including new skills, increased pay, and new job opportunities. Perfect for developers at all levels. Enroll now.*
Arcade raises a $17m round for AI product creation
Prepared raises a $27m Series B to create AI for 911 calls (TechCrunch).
Convergence Labs raises a $12m Pre-Seed to build a personal AI assistant
The Secret Service spent $50,000 on OpenAI and won’t say why (404 Media).
OpenAI chair says board has discussed equity compensation for CEO Sam Altman (Reuters).
The Fed slashed interest rates last week, but Treasury yields are rising. What’s going on? (CNBC).
X blocks links to hacked JD Vance dossier (The Verge).
Behind OpenAI’s staff churn (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
AI for Good: Carbon sequestration
Source: Unsplash
Agricultural practices have a significant impact on the environment. In terms of carbon emissions alone, agriculture is responsible, in a number of ways, for around 10% to 12% of the global total.
Part of the problem here is that widespread tilling breaks up shallow root systems, causing naturally sequestered carbon to re-enter the atmosphere.
What happened: New research published earlier this year points to a potential solution, one that combines a different approach to root systems with machine learning tech.
The key is deeper roots. While plants normally spread their root systems out, the discovery of the DRO1 gene enables scientists to develop crop varieties that grow their roots straight down, better ensuring that naturally captured carbon stays that way.
But measuring it remains complicated. This is where AI comes into play: a combination of remote sensors and analytical AI models allows scientists to quickly and accurately measure soil carbon at very large scale.
Why it matters: Improving soil carbon levels is good for crop health, reduces global emissions and increases sustainability and resilience to climate change.
The FTC’s AI crackdown
The United States Federal Trade Commission (FTC) is taking legal action against five companies for deceptive AI-related practices.
The details: The cases come as part of the FTC’s Operation AI Comply sweep. The FTC is taking action against DoNotPay, Ascend Ecom, Ecommerce Empire Builders, Rytr and FBA Machine.
The services that prompted the lawsuits include AI tools designed to publish fake product reviews, schemes promising users that they’ll make money through AI and a robo lawyer that doesn’t live up to the company’s claims.
Legal action has already begun across all five cases, with a federal court issuing orders to temporarily halt the schemes of four of the five companies, while the fifth — DoNotPay — agreed to a $193,000 settlement.
“Using AI tools to trick, mislead, or defraud people is illegal,” FTC Chair Lina Khan said. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”
Bring on the lawsuits.
We talk a lot about regulation, and what the FTC is doing here is a great form of it: lawmaking will move slowly, but federal regulators can move much more quickly. And moves like these can help set the stage for an environment that skims some of the scum from the sector.
As long as we’ve had an internet, we’ve had internet scams. They’re not going away; AI just opened up a new opportunity for a slightly different breed of scam.
Don’t fall for them.
🤔 Your thought process:
Selected Image 1 (Left):
“AI still can’t get sun reflections right in a lot of cases”
Selected Image 2 (Right):
“I thought the airport and runway looked a little unfinished. I guess life is unfinished.”
💭 A poll before you go
Have you had issues with faulty AI text detection?
Your view on Colorado’s new AI-powered call center:
More than a third of you said it sounds okay, so long as there are proper guardrails; a quarter said it sounds great. The rest are concerned about bugs common to LLMs and generative AI, including hallucination and bias.
Something else:
“The rapid language translation seems to be a fine addition to emergency response as long as it’s accurate. The wholesale elimination of bystander reporting seems to be a loss of data that could be used.”
Sounds okay, but needs guardrails:
“The concept is great. I hope there are no bugs that may cost life or injury.”
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.