⚙️ Exclusive Poll: Washington’s inner conflict between AI innovation and security
Good morning. It’s hard to look away from the ScarJo v. OpenAI saga that’s been playing out, but we’ve got a lot to talk about today.
Don’t worry, it’s not all bad. It’s just a little nuanced, which is incidentally my favorite word to use when talking about AI.
Today’s edition is all about safety.
In today’s newsletter:
🌎 The EU AI Act gets its official green light
💻 Microsoft combines AI + PCs in ‘privacy-invading product’
🇰🇷 Big Tech giants make safety agreement at Seoul Safety Summit
🏛️ Exclusive Poll: Washington’s inner conflict between AI innovation and security
The EU AI Act gets its official green light
Image Source: Christian Lue, Unsplash
In March, the world’s first piece of AI legislation – the European Union’s AI Act – was approved. On Tuesday, it was officially greenlit by the EU Council and will gradually begin entering into force.
Keep in mind: The AI Act takes a risk-based approach, meaning that companies working to develop models classified as ‘high-risk’ will face stricter regulatory and transparency requirements.
Most generative AI models will be classified as ‘general purpose’ rather than ‘high-risk,’ and will be subject to transparency requirements, among other things.
What happens next: The Act will soon be published in the EU’s official journal; 20 days later, it will enter into force, with its provisions applying in stages.
6 months after entry into force, the ban on prohibited systems will take effect.
1 year after entry into force, rules governing general-purpose systems will take effect.
2 years after entry into force, the Act will apply in full.
Microsoft combines AI + PCs in ‘privacy-invading product’
Image source: Microsoft
If there is one truth that surrounds the current moment in AI, it is that just about every tech company is rushing — not to perfect the technology — but to integrate it into everything they can imagine. And next up is the simple, old-fashioned, not-smart-enough laptop.
The details: During its Build conference, Microsoft announced a whole new category of laptops designed to run AI: Copilot+ PCs. A key feature of these new PCs is something Microsoft is calling “Recall.”
Recall, according to CEO Satya Nadella, acts as a “photographic memory … for all your history.”
Computers running Recall constantly take screenshots of users’ activity, then use an on-device AI model to make that data searchable.
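Microsoft hasn’t published Recall’s internals, but to make the privacy debate concrete, here’s a minimal, hypothetical sketch of what a screenshot-then-index pipeline like it could look like. Everything here is an assumption for illustration: the mss and pytesseract libraries, the SQLite full-text index and the recall_index.db file are stand-ins, not anything Microsoft has disclosed.
```python
# Hypothetical sketch of a Recall-style pipeline: capture screenshots on a
# timer, extract text locally, and index it for search. Microsoft has not
# disclosed Recall's internals; mss, pytesseract, and SQLite FTS5 are
# illustrative stand-ins, not the actual implementation.
import sqlite3
import time

import mss                    # cross-platform screenshot capture
import pytesseract            # local OCR (requires the tesseract binary)
from PIL import Image

DB_PATH = "recall_index.db"   # a plain local file, readable by anyone
                              # (or any software) with access to the device

db = sqlite3.connect(DB_PATH)
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(ts, content)")

def capture_and_index() -> None:
    """Grab one screenshot, OCR it on-device, and store the text."""
    with mss.mss() as sct:
        path = sct.shot(output="latest.png")   # screenshot of monitor 1
    text = pytesseract.image_to_string(Image.open(path))
    db.execute("INSERT INTO shots VALUES (?, ?)",
               (time.strftime("%Y-%m-%d %H:%M:%S"), text))
    db.commit()

def search(query: str):
    """Full-text search over everything the machine has displayed."""
    return db.execute(
        "SELECT ts, snippet(shots, 1, '[', ']', '…', 10) "
        "FROM shots WHERE shots MATCH ?", (query,)).fetchall()

if __name__ == "__main__":
    for _ in range(3):            # a real agent would run continuously
        capture_and_index()
        time.sleep(5)
    print(search("password"))     # anything on-screen becomes searchable
```
The sketch also shows why “stored locally” isn’t automatically reassuring: the searchable index is just a file on disk, available to anyone (or any program) with access to the machine.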
Privacy concerns: Microsoft said this information will be stored locally & will not be used to train its AI models. Users can also opt out of sharing certain types of information.
Tech researcher Molly White, however, said that storing information locally “is not a panacea for these alarming privacy-invading products!” She said that it remains unclear what exactly is being stored locally, and whether that data is being backed up on company servers in any capacity.
There is also the risk of shared or stolen devices & the chance that Microsoft might later change its mind about local storage.
Now I know what you’re thinking but Nadella says it’s only locally stored so it’s only a risk if you lose the device, people access it without you knowing, your boss has remote access to it, Windows changes unexpectedly, you sell the device, something you’ve installed locally get…
— mike cook (@mtrc)
10:38 AM • May 21, 2024
Microsoft did not respond to a request for more information on these questions.
Together with MindStudio
Optimize workflows with custom AI applications
Join MindStudio’s free two-hour course to discover how to empower your employees with personalized AI applications.
And don't worry — you won't need to learn how to code.
This webinar will explain how to:
Identify processes and tasks in your organization that AI can automate
Build custom AI applications for any job function and securely manage usage
Use your organization’s data sources to tailor AIs to employees’ needs
Integrate AIs within existing platforms like Slack, email, and CRM
If you want to dramatically increase your productivity through a more specialized use of AI, sign up today.
Big Tech giants make safety agreement at Seoul Safety Summit
Image Source: Yohan Cho, Unsplash
Sixteen prominent AI companies, including OpenAI, Microsoft, xAI, IBM, Google and Meta, made an AI safety commitment at the Seoul Safety Summit on Tuesday.
Key Points: The companies in question have agreed not to develop or deploy AI models if the risks cannot be “sufficiently mitigated.” The agreement also calls for the publication of individual safety frameworks that will detail how each company will handle a situation where the risks become “intolerable.”
You can read OpenAI’s safety update from the summit here.
Dr. Noah Giansiracusa, a professor of mathematical sciences at Bentley University, told me that international agreements to research AI risks are “important” and “helpful.” But the company commitments, less so.
“They still can't make social media safe so why would we believe they're going to make AI safe,” he said. “OpenAI doesn't even follow its own commitment to being open. These are not trustworthy organizations, I'm sorry to say.”
💰AI Jobs Board:
Senior Technology Specialist: Microsoft · United States · Hybrid; Multiple locations · Full-time · (Apply here)
Language Engineer, Artificial General Intelligence: Amazon · United States · Boston · Full-time · (Apply here)
Manager, AI Advocacy: IBM · United States · Hybrid; New York, NY · Full-time · (Apply here)
📊 Funding & New Arrivals:
Scale AI raised $1 billion in a round led by Nvidia & Amazon. It’s now valued at around $14 billion.
Music generator Suno raised $125 million at a $500 million valuation (we still don’t know what data Suno was trained on).
CoreWeave, a cloud provider, has achieved a $19 billion valuation. The company recently raised $7.5 billion in debt from Blackstone.
🌎 The Broad View:
The case for Apple News (Semafor).
One of Pakistan’s largest thrift platforms is leveraging AI (Rest of World).
Wall Street is getting pumped for Nvidia’s earnings (CNBC).
*Indicates a sponsored link
Together with Superhuman
Superhuman was already the fastest email experience in the world, designed to help customers fly through their inbox twice as fast as before.
We’ve taken it even further with Superhuman AI — an inbox that automatically summarizes email conversations, proactively fixes spelling errors, translates email into different languages and even automatically pre-drafts replies for you, all in your voice and tone.
Superhuman customers save 4 hours every week, and with Superhuman AI, they get an additional hour back to focus on what matters to them.
Exclusive Poll: Washington’s inner conflict between AI innovation and security
Image Source: Harold Mendoza, Unsplash
If there is another truth that surrounds this moment in AI, it is simple: This story is more about power than it is about technical advancements. For years now, Big Tech has been unilaterally making decisions that impact most of the world. And with regulation moving slowly (and fighting a rising tide of Big Tech lobbying efforts), the voice of the people being impacted is rather conspicuously quiet.
This disparity is why Daniel Colson started the Artificial Intelligence Policy Institute (AIPI), a think tank whose purpose is to get people’s thoughts on AI to U.S. lawmakers.
The latest from AIPI: In a new survey shared exclusively with The Deep View, the AIPI found that 66% of U.S. voters believe AI policy should prioritize keeping the tech out of the hands of bad actors, rather than providing the benefits of AI to all.
The vast majority of respondents believe AI progress ought to move slower & 70% think AI will destroy more jobs than it will create.
More than 70% of respondents don’t trust tech executives to self-regulate.
More than 60% of respondents support export restrictions on powerful, U.S.-built AI models & restrictions on China’s access to U.S. cloud compute.
More than 50% of respondents are concerned about recent advancements in AI and wish progress would just slow down.
The online poll of 1,053 people was conducted on May 3, 2024.
Washington’s inner battle: Colson told me that policymakers he’s spoken with are grappling with issues of prioritization. On the one hand, you have the severe national security concerns posed by AI, and on the other, you have a desire to unlock American innovation and “maintain U.S. leadership and competitiveness” in AI.
But he said that the AIPI’s polling has made clear that the American public is laser-focused on security, safeguards and risk mitigation in AI.
“In many ways, pitting innovation against security sets up a false dichotomy,” he said.
Colson said that the AIPI’s data shows that a failure to prevent AI models from being exploited by hostile actors will be met by the American people with “frustration.” An international incident could “deeply undermine public trust and support” for AI tech “far more than sensible regulations ever would.”
Colson has found that “policymakers have been particularly moved” by this framing, since it aligns American competitiveness (the Enforce Act, for instance) with safety considerations.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).
SPONSOR THIS NEWSLETTER
The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.
One last thing👇
One of the coolest videos from the meteor yesterday in Portugal
— Neutralious (@Neutralious)
4:51 PM • May 19, 2024
That's a wrap for now! We hope you enjoyed today’s newsletter :)
What did you think of today's email?
We appreciate your continued support! We'll catch you in the next edition 👋
-Ian Krietzberg, Editor-in-Chief, The Deep View