⚙️ Cybersecurity researcher says Microsoft should ‘recall Recall’ before it’s too late
Good morning. Really interesting results from yesterday’s poll about AI clones — “this is a terrible idea” got the most votes, but not by much.
Your thoughts ranged from “this is an obvious evolution” to it feels like it could “become a Black Mirror episode.” Check out the results below.
Results from the poll: ‘How would you feel about a world of decision-making AI clones of yourself?’
In today’s newsletter:
🤗 AI For Good: NOAA is fighting riptides with AI
📄 Research: AI performance in medical imaging is ‘worse than random’
📚 Study: GenZ isn’t so big into AI
🛜 Cybersecurity researcher says Microsoft should ‘recall Recall’ before it’s too late
AI For Good: NOAA is fighting riptides with AI
Image source: NOAA
Around two years ago, the National Oceanic and Atmospheric Administration (NOAA) released an experimental coastal prediction tool. The tool — powered by AI — issues rip current forecasts for beaches all over the country.
Details: The model “can predict the hourly probability of rip currents along U.S. beaches up to six days out.”
The model uses data about wave and water levels to predict the likelihood of dangerous currents.
A new approach: “Rip currents account for an estimated 100 deaths in the United States each year,” Gregory Dusek, a NOAA scientist who developed the model, said.
“Before this, forecasters were manually predicting rip currents on a large section of the ocean twice a day and only a day or two into the future. The earlier prediction has (the) potential to substantially increase awareness and reduce drownings.”
Research: AI performance in medical imaging is ‘worse than random’
Image Source: National Cancer Institute
A persistent issue in the AI industry is poor evaluation methodology, which lets developers over-sell their models and can convince users to deploy them in situations where they shouldn’t.
Researchers at the University of California, Santa Cruz, are seeking to introduce more robust methods of evaluation specifically for large multimodal models (LMMs). A recent paper (that hasn’t yet been peer-reviewed) evaluated a series of LMMs in the field of medical visual question answering. And the models didn’t hold up.
You can read the full paper here.
Key findings: The study found that top models — such as Gemini Pro — “perform worse than random guessing” on specialized diagnostic questions, which highlights “significant limitations” of the LMM architecture.
The researchers tested models by introducing hallucination pairs to double-check the model’s ability to answer a question. An example of this is shown in the image below.
The seven models tested suffered accuracy reductions of between 10% and 78% under this adversarial testing.
An example of the adversarial testing employed by the researchers (Figure 2).
“These findings underscore the urgent need for robust evaluation methodologies to ensure the reliability and accuracy of LMMs in medical imaging,” according to the report.
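To make the “hallucination pair” idea concrete, here is a minimal sketch of how such an evaluation might be scored: a model only gets credit for a question if it also answers a paired, deliberately misleading variant correctly. All names and questions below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of adversarial-pair evaluation: credit is given only
# when the model answers BOTH the original question and its misleading
# paired variant correctly. Questions and model are toy examples.

def paired_accuracy(model, pairs):
    """pairs: list of (original_q, adversarial_q, answer) tuples."""
    correct = 0
    for original_q, adversarial_q, answer in pairs:
        if model(original_q) == answer and model(adversarial_q) == answer:
            correct += 1
    return correct / len(pairs)

# A toy "model" that always answers "yes" can look plausible on ordinary
# questions but is exposed once each question carries an adversarial twin.
always_yes = lambda q: "yes"
pairs = [
    ("Is a mass visible?", "Double-check: is a mass really visible?", "yes"),
    ("Is the scan normal?", "Double-check: is the scan really normal?", "no"),
]
print(paired_accuracy(always_yes, pairs))  # 0.5
```

Scoring pairs jointly like this is what produces the steep accuracy drops the researchers report: any model leaning on answer-frequency shortcuts loses credit on one half of each pair.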
Together with Superhuman
Superhuman — the fastest email experience ever made — now has AI built-in.
With Superhuman's new Ask AI feature, you can immediately get answers from your inbox.
Stop searching, start asking with Superhuman AI.
Sign up for free beta access through our partnership.
Study: GenZ isn’t so big into AI
Photo by Kelly Sikkema (Unsplash).
A recent study — conducted by Hopelab, Common Sense Media and The Center for Digital Thriving at Harvard Graduate School of Education — of the younger generation’s impression of generative AI found that the kids aren’t wholly sold on the tech.
The survey of 1,274 young people (ages 14-22) was conducted in November 2023.
Key findings: Half of those surveyed have used generative AI at some point in their lives.
But only 4% are daily users.
41% have never used AI tools & 8% don’t know what AI tools are.
A third of those who stay away from AI simply don’t believe it will be helpful to them, while 23% don’t know how to use the tools and 22% are concerned about privacy.
The bulk of kids who do use genAI use it to get information, brainstorm ideas and for homework help.
"As generative AI continues to evolve, it is essential that the voices and experiences of young people are included in developing these tools and how they will transform many aspects of our society," Amy Green, Head of Research at Hopelab, said.
💰AI Jobs Board:
Generative AI Engineer: Ascendion · United States · New Jersey; Hybrid · Full-time · (Apply here)
Data Scientist: GitAI · United States · Torrance, CA · Full-time · (Apply here)
AI Researcher: Peraton · United States · Fort Meade, MD · Full-time · (Apply here)
📊 Funding & New Arrivals:
Swiss startup EthonAI — focusing on making sense of manufacturing data — raised $16.5 million in Series A funding.
WeaveBio, an AI-powered life sciences platform, announced $10 million in seed funding.
Enterprise AI company Cloudera is acquiring self-described “generative AI workbench” Verta.
🌎 The Broad View:
Deepfakes haven’t been as much of a problem in India’s election as expected (Rest of World).
How the quest for Chinese characters led to the early invention of autocomplete (MIT Technology Review).
Google leak reveals thousands of privacy incidents (404 Media).
*Indicates a sponsored link
Together with Notta Showcase
Go Global With AI Video Translation
In today’s digital era, your audience is global. With Notta Showcase AI video translator, you can effortlessly convert videos into 15+ languages with natural-sounding dubbing, making sure your message reaches & resonates with viewers worldwide.
Unlock the full potential of your videos
Advanced voice cloning. Preserve the original voice style while translating content to ensure a natural listening experience.
Break down language barriers. Perfect for businesses, educators and content creators aiming for a global impact.
Easy to get started. You provide the video, specify the languages and let our AI handle the rest.
Don't miss out on the opportunity to engage with a global audience. Get started for free today.
Deep View readers can enjoy Showcase Pro with a 10% discount using the code 'DV10OFF' at checkout, valid until June 13th.
Cybersecurity researcher says Microsoft should ‘recall Recall’ before it’s too late
Photo by Bram Van Oost (Unsplash).
Last month, Microsoft unveiled its latest AI-enabled product: Copilot+ PCs, a new family of laptops that have AI tech humming under the hood. One of the main new features of these AI devices is something Microsoft calls “Recall.”
Computers with Recall take screenshots of users’ activity, then use AI to make all that data searchable.
The computers will become available on June 18.
A privacy nightmare: People and researchers alike recoiled at the idea of Recall, despite Microsoft’s assurances that all data is stored locally & that hackers would need to take physical possession of the computer to access the data.
Recall Recall: Cybersecurity researcher Kevin Beaumont was able to simulate & test the feature using AmperageKit; he discovered that Recall “stores everything locally in an unencrypted SQLite database, and the screenshots are simply saved in a folder on your PC.”
He found gaps in Recall’s security you could “drive a plane through.”
You can read his full breakdown here.
What does this mean? Even if the data is encrypted at rest, once your computer is running, it gets decrypted. Beaumont said that simple InfoStealer trojans — which are currently used by cybercriminals to steal passwords from computers — can easily be adapted to access Recall.
“Recall enables threat actors to automate scraping everything you’ve ever looked at within seconds,” he said.
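The point is worth making concrete: an unencrypted local SQLite database offers no protection against any code running as the user. The sketch below is purely illustrative — the path, table, and column names are hypothetical, not Recall’s actual schema — but it shows how little an InfoStealer-style script would need.

```python
# Illustrative only: why an unencrypted local SQLite store is trivially
# readable by any process running as the user. The database path, table,
# and columns here are hypothetical stand-ins, not Recall's real schema.
import os
import sqlite3
import tempfile

# Create a stand-in for the on-disk activity database.
db_path = os.path.join(tempfile.mkdtemp(), "activity.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE captures (ts TEXT, app TEXT, extracted_text TEXT)")
con.execute(
    "INSERT INTO captures VALUES "
    "('2024-06-01T09:00', 'Browser', 'password: hunter2')"
)
con.commit()
con.close()

# "Stealing" the data takes three stdlib calls — no exploit, no privilege
# escalation, just a known file path readable by the user's own processes.
con = sqlite3.connect(db_path)
rows = con.execute("SELECT app, extracted_text FROM captures").fetchall()
con.close()
print(rows)  # [('Browser', 'password: hunter2')]
```

That asymmetry — months of a user’s screen history searchable in plain text, versus a few lines of code to exfiltrate it — is what Beaumont means by automating scraping “within seconds.”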
Note: Recall is enabled by default. After set-up, you can enter settings and disable it.
Beaumont said Microsoft should “recall Recall” and rework the feature until it operates as securely as it should.
Considering vulnerable populations that might be less aware of cybersecurity, Beaumont said that “there’s no way this implementation doesn’t end in tears.”
My thoughts: Every instance I witness of major companies seeming to choose AI integrations over data privacy/security — in some effort to further boost their trillion-dollar market caps — convinces me that these big tech incumbents are vulnerable to a new tech industry where privacy always comes first.
What we’re seeing now is a push for almost purposeless AI; I think the next big iteration will involve applications of AI designed to accomplish the things consumers actually want. I think we’ll see the commoditization of privacy in a move out of our current phase, which centers around the commoditization of data.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).
SPONSOR THIS NEWSLETTER
The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.
One last thing👇
One immensely positive thing about AI and AI research is how it shows that each of us have this amazingly complex thing in our heads that has such capacity for learning and doing. It's so amazing that we still don't fully understand how it works.
— Greg Bildson🐀 (@gbildson)
1:37 PM • Jun 1, 2024
That's a wrap for now! We hope you enjoyed today’s newsletter :)
What did you think of today's email?
We appreciate your continued support! We'll catch you in the next edition 👋
-Ian Krietzberg, Editor-in-Chief, The Deep View