⚙️ OpenAI launches $2 million fund to fight deepfakes
Good morning. Today, I’ve got my eye on the Knicks and on a positive downpour of funding announcements and debuts in the AI sector, running the gamut from autonomous driving to AI-powered on-device apps. Shares of Palantir, meanwhile, whose Q1 earnings we reported on yesterday, spent Tuesday in full retreat despite a strong report.
Let me know on Twitter if you think this marks the end of the Palantir surge.
In today’s newsletter:
🗳️ Poll: U.S. voters say publicly available data doesn’t make scraping okay
🤝 OpenAI inks a licensing deal with Dotdash Meredith
📱 Report: Apple is working on AI chips (but not in the Nvidia way)
📽️ OpenAI is focusing on content provenance with C2PA
Created with AI by The Deep View.
Poll: U.S. voters say AI companies shouldn’t scrape
A new poll conducted by the Artificial Intelligence Policy Institute (AIPI) – and shared with The Deep View – found that 60% of respondents believe AI companies should not be allowed to train their models on a corpus of public data scraped from the nether regions of the internet. Nearly 75% of those polled said that AI companies ought to be “required to compensate the creators” of the data they use.
And about 80% said they’d like to see regulatory actions that cement those stances into law.
This, of course, is the debate at the core of the business of generative AI, with the corporations largely positing that scraping publicly available data is “fair use,” while creators have largely taken the opposite stance. The U.S. Copyright Office has so far declined to pick sides, and while a number of lawsuits have been filed against AI companies, none has yet reached the fun, definitive stage of a ruling.
And even as these concerns abound, AI companies have been inking licensing deals with media groups, something that some creatives have said proves that even the companies themselves think their “fair use” claim is weak.
Daniel Colson, the executive director of the AIPI, told me that while these deals are a decent start, they can easily leave smaller creators “out in the cold.” Even with licensing deals, he said, compensation remains uneven, something that needs to be legally addressed.
“Currently, the benefits of artificial intelligence are supported by the work and words of billions, and those same people are at risk if things go off the rails,” he said. “The bottom line is the public that helped build the foundation for AI deserves a say in how the technology is developed.”
Do you think AI companies should be allowed to train on publicly available data without crediting/compensating creators?
OpenAI signs another major media deal
Just a day after inking a licensing deal with Stack Overflow, OpenAI on Tuesday signed another, this time with Dotdash Meredith, the media organization behind People, Serious Eats and Investopedia (plus a bunch of other brands you’ve probably read or watched at some point).
Beyond training OpenAI’s models on Dotdash Meredith’s content, the agreement will see ChatGPT surface information (with links) from Dotdash’s brands in relevant queries. As part of the deal, Dotdash will leverage OpenAI’s models to enhance its AI-powered advertising tool.
Created with AI by The Deep View.
As with OpenAI’s other deals, including those with the FT and Axel Springer, we don’t know how many zeroes are on Dotdash Meredith’s new check (or how often that payment will come in). These agreements come as other publications, in a movement spearheaded by the New York Times, have opted instead for lawsuits.
The move, which Dotdash CEO Neil Vogel said would “ensure a healthy internet for the future,” came alongside an announcement from OpenAI that it is building a Media Manager tool, which will allow creators to identify and remove copyrighted work that OpenAI has scraped.
Our approach to content and data in the age of AI: openai.com/index/approach…
— OpenAI (@OpenAI)
3:04 PM • May 7, 2024
Apple is working on AI chips
Apple – which has been slow to join the AI race consuming the rest of its Big Tech peers – is looking to extend its chip dominance into the age of artificial intelligence. The Wall Street Journal reported Tuesday that the company is developing a chip meant to run AI software in data center servers, a project codenamed “ACDC,” or “Apple Chips in Data Center.”
Apple didn’t respond to a request for comment.
According to the report, the chip has been in development for years, though it remains unclear if or when it will be officially unveiled. Apple has been working with chip-making giant Taiwan Semiconductor Manufacturing (TSMC) to design and produce the chip, which will be focused on running AI applications rather than training AI models (the territory Nvidia dominates).
Daniel Newman, CEO of tech research firm the Futurum Group, said the move makes “perfect sense” given Apple’s coming push into AI integration.
Apple CEO Tim Cook said in February that the company is “investing significantly” in generative AI.
TOGETHER WITH Sana
Work faster and smarter with Sana AI
Meet your new AI assistant for work.
On hand to answer all your questions, summarize your meetings, and support you with tasks big and small.
Try Sana AI for free today.
💰Jobs in AI:
Director, Generative AI Platform: Capital One · United States · Remote eligible (Apply here)
Product Manager II, Gen AI Model Quality: Google · United States · (Apply here)
AI/Machine learning engineer manager/consultant: EY · United States · Remote (Apply here)
Senior Python Engineer: Hello Fresh · Netherlands · Amsterdam (Apply here)
📊 Funding & New Arrivals:
Wayve – a British AI company developing Embodied AI for autonomous driving – on Tuesday announced a $1.05 billion Series C funding round, with new contributions from Nvidia.
Espresso AI emerged from stealth Tuesday with an $11 million funding round. The company’s tech optimizes code to reduce the cost of cloud computing.
Daloopa, an AI-powered data company, on Tuesday closed an $18 million Series B funding round, led by Touring Capital with participation from Morgan Stanley.
Conn.ai came out of stealth Tuesday with a product called Asta, designed to automate people’s daily workflows in one AI-powered ecosystem.
Here’s a taste of what it’s like travelling in one of the @wayve_ai “hands-free” cars around the crazy roads behind Kings Cross (with CEO @alexgkendall )
Link here:
British driverless car start-up raises $1bn
thetimes.co.uk/article/347140…
— Katie Prescott (@kprescott)
6:50 AM • May 7, 2024
❓️Misc:
TOGETHER WITH OMNIPILOT
Omnipilot is a Mac app that brings the power of AI to every application on your computer.
It uses the current context of the app you're working in to provide intelligent suggestions and assistance, just a shortcut away.
Whether you're writing documents, composing emails, or looking for quick answers to questions, Omnipilot is the fastest way to use AI in your work.
OpenAI is focusing on content provenance
The ongoing rise of generative AI has made deepfakes a hell of a lot more common than ever before.
The impact of this social-media-fueled environment – where truth is now even more difficult to ascertain and falsehoods even easier to spread – is already being felt. Issues of deepfake pornographic abuse have plagued everyone from high school girls to Taylor Swift and other public figures; AI-generated misinformation – especially dangerous in light of all the elections taking place this year – has already abounded in the form of images, videos and synthetic phone calls.
The companies behind these AI generators have acknowledged the issue of deepfake abuse and misinformation in a few different ways this year; most notably through the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” announced in February. But guardrails remain at least partially ineffectual and watermarking – without total participation of social media platforms – likely will not be enough to shield people from the reality of this new world of digital lookalikes.
OpenAI on Tuesday announced that it is joining the Steering Committee for the Coalition for Content Provenance and Authenticity (C2PA), an organization that has begun to popularize the use of C2PA metadata, which proves the provenance of a piece of media.
Acknowledging that this metadata can be easily stripped from a piece of content, OpenAI added that it is also developing a new, tamper-resistant method of watermarking — which will be applied eventually to its audio and video products — as well as a tool designed to distinguish between AI-generated and non-AI-generated images.
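For a rough sense of how provenance metadata works: a C2PA-style manifest is a cryptographically signed claim attached to a piece of media. The sketch below is a loose, simplified illustration (it uses a plain Ed25519 signature over a JSON blob, not the actual C2PA manifest format or any OpenAI tooling), but it captures the core idea, including the weakness OpenAI acknowledges: once the metadata is stripped, there is nothing left to verify.

```python
# A minimal, illustrative sketch of signed provenance metadata in the spirit of
# C2PA. This is NOT the real C2PA manifest format or SDK -- just a toy example
# showing how a signed claim can be bound to a media file's hash.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media: bytes, generator: str, key: Ed25519PrivateKey):
    """Build a provenance claim for `media` and sign it."""
    manifest = json.dumps({
        "generator": generator,  # e.g. the tool that produced the image
        "sha256": hashlib.sha256(media).hexdigest(),
    }).encode()
    return manifest, key.sign(manifest)  # the signed claim travels with the file


def verify_manifest(media: bytes, manifest: bytes, signature: bytes,
                    public_key: Ed25519PublicKey) -> bool:
    """Check that the claim is authentic and matches this exact file."""
    try:
        public_key.verify(signature, manifest)  # was the claim tampered with?
    except InvalidSignature:
        return False
    claimed_hash = json.loads(manifest)["sha256"]
    return claimed_hash == hashlib.sha256(media).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...raw image bytes..."
    manifest, sig = make_manifest(media, "example-image-generator", key)

    print(verify_manifest(media, manifest, sig, key.public_key()))       # True
    print(verify_manifest(b"edited!", manifest, sig, key.public_key()))  # False
```

That gap is exactly why OpenAI is pairing the metadata approach with tamper-resistant watermarking and a detection classifier.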
At the same time, OpenAI and Microsoft launched a $2 million “societal resilience” fund, meant to increase AI education and literacy among voters and vulnerable communities. This fund comes in the form of grants going to certain organizations, including C2PA and Older Adults Technology Services from AARP.
Introducing a new classifier to help researchers identify content created by DALL·E 3.
We are also joining the Coalition for Content Provenance and Authenticity (@C2PA_org) Steering Committee to promote industry standards.
— OpenAI (@OpenAI)
2:05 PM • May 7, 2024
My take:
This is definitely a good step. I like seeing OpenAI and Microsoft taking a bit of a more action-oriented approach to the idea of “responsible AI.”
And I agree with OpenAI’s statement that the future will be one of rampant watermarks – my concern is cross-platform consistency. These efforts to ensure content provenance won’t get very far if the social media platforms don’t get on board. The world I’d love to see (hopefully before November) is one in which Twitter, LinkedIn, Meta, YouTube etc. have detection tools running 24/7 in the engine room of their platforms; every social media post should have its provenance (or the likelihood of its provenance) flagged clearly and visibly.
But, one step at a time, I suppose.
Which image is real?
Image 1 | Image 2
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).
*Indicates a sponsored link
SPONSOR THIS NEWSLETTER
The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft, and many more.
One last thing👇
A conversation with OpenAI on what to expect in the next 12 months
- today's systems are laughably bad
- ChatGPT not long term engagement model
- models capable of complex "work"
- like a great team mate working with u
- shift towards verbal interfaces & beyond, Multimodality
— ʟᴇɢɪᴛ (@legit_rumors)
3:11 AM • May 7, 2024
That's a wrap for now! We hope you enjoyed today’s newsletter :)
What did you think of today's email?
We appreciate your continued support! We'll catch you in the next edition 👋
-Ian Krietzberg, Editor-in-Chief, The Deep View