⚙️ Report: Mass surveillance and human rights violations in Denmark
Good morning. Nvidia will report earnings after the bell today.
Wedbush analyst Dan Ives — unsurprisingly — expects a “drop the mic” performance from the semiconductor giant. But issues with Nvidia’s newest chip in the data center are hanging over the company, and expectations, as usual with Nvidia, sit right at the edge of the unachievable.
We’ll have a breakdown for you tomorrow morning.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌎 AI for Good: NASA’s new Earth copilot
👁️🗨️ Microsoft partner unveils GPU alternative
🇩🇰 Report: Mass surveillance and human rights violations in Denmark
AI for Good: NASA’s new Earth copilot
Source: Microsoft
NASA, through a series of orbiting satellites, keeps a close eye on the Earth. Every day, the space agency adds to its massive repository of geospatial data, which researchers can then use to, for instance, inform policy decisions and monitor natural habitats, climate change and wildfires.
What happened: But NASA’s geospatial data is too complex and too sizable for the average person to navigate. So the agency teamed up with Microsoft to build an Earth copilot.
The generative AI copilot makes the database searchable through natural language queries.
This enables a wider range of people, from climate researchers to high school teachers, to quickly access and engage with NASA’s geospatial data.
The platform is currently being assessed by NASA engineers. Once this phase is completed, the Earth copilot will be publicly released.
Meet your new AI assistant for work
Sana AI is a knowledge assistant that helps you work faster and smarter.
You can use it for everything from analyzing documents and drafting reports to finding information and automating repetitive tasks.
Integrated with your apps, capable of understanding meetings and completing actions in other tools, Sana AI is the most powerful assistant on the market.
Microsoft partner unveils GPU alternative
Source: d-Matrix founders: Sudeep Bhoja — CTO (left); Sid Sheth — President & CEO (right)
There are two sides to the construction and commercialization of generative AI technology: training and inference. Training is the first phase, and it involves feeding datasets to a given model; this process additionally requires a ton of computing power, which is today largely achieved through stacks of expensive Nvidia GPUs.
Inference — which follows training — involves the operation of that model; i.e., asking it to perform predictions or produce output. This stage is enormously expensive, producing the bulk of energy strain (and subsequent carbon emissions) in data centers.
The logic behind that is simple — training is a finite process; inference is not. And as user numbers and task complexity grow, inference has become something of a problem, with Gartner estimating that nearly half of all existing data centers will be power-constrained by 2027.
What happened: d-Matrix on Tuesday unveiled Corsair, a new computing platform (and GPU alternative) specifically designed to power inference in the data center.
The focus of this new platform is to solve the memory-compute problem; Corsair is built on a Digital In-Memory Compute (DIMC) architecture that, according to the company, “enables 60,000 tokens/second at 1 ms/token for Llama3 8B in a single server and 30,000 tokens/second at 2 ms/token for Llama3 70B in a single rack.”
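A quick back-of-envelope check on those quoted figures (our arithmetic, not the company's): if we assume “1 ms/token” describes the per-token latency of a single generation stream and “60,000 tokens/second” describes aggregate throughput, the numbers imply roughly 60 concurrent streams in both configurations:

```python
# Back-of-envelope reading of d-Matrix's quoted Corsair figures.
# Assumption (ours, not the company's): the ms/token figure is
# per-stream latency, and the tokens/second figure is aggregate
# throughput across all concurrent streams.

def implied_concurrent_streams(aggregate_tokens_per_s: float,
                               latency_ms_per_token: float) -> float:
    """Streams needed to hit the aggregate rate at the given per-token latency."""
    per_stream_tokens_per_s = 1000.0 / latency_ms_per_token
    return aggregate_tokens_per_s / per_stream_tokens_per_s

# Llama3 8B, single server: 60,000 tok/s at 1 ms/token
print(implied_concurrent_streams(60_000, 1.0))  # 60.0
# Llama3 70B, single rack: 30,000 tok/s at 2 ms/token
print(implied_concurrent_streams(30_000, 2.0))  # 60.0
```

Under that reading, the headline numbers are a claim about serving many users at low per-user latency, not about raw single-stream speed.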
The company, which has raised around $160 million, has forged strategic partnerships with and secured backing from Microsoft M12 (Microsoft’s venture arm), in addition to several venture capital firms.
“d-Matrix’s compute platform radically changes the ability for enterprises to access infrastructure for AI operations and enable them to incrementally scale out operations without the energy constraints and latency concerns that have held AI back from enterprise adoption,” Michael Stewart, managing partner of M12, said in a statement.
🚀SmallCon: Free Virtual GenAI Conference featuring Meta, Mistral, Salesforce, Harvey AI and More🚀
Learn what it takes to build big with small models from AI trailblazers like Meta, Mistral AI, Salesforce, Harvey AI, Upstage, Convirza, Nubank, Nvidia, and more. Dive into cutting-edge tech talks, live demos, and interactive panels on topics like:
The Future is Small: Why Apple is Betting Big on Small Models
Unlocking Enterprise Transformation with GenAI
Trends in GenAI: What's New in SLM Training and Serving
AI Agents that Work: Lessons Learned
Applied AI and Real World Use Cases
Save your spot for SmallCon to learn how to build the GenAI stack of the future!
The Department of Justice has asked a judge to force Google to sell off its Chrome browser and unbundle Android, Bloomberg reported. This would act as a remedy for Google’s search monopoly.
The U.K.’s Competition and Markets Authority has ended its investigation into Anthropic’s relationship with Google, according to the Irish News, approving the partnership. The CMA has previously investigated and cleared Microsoft’s relationship with Inflection.
OpenAI’s female staff complain of gender disparity after Murati’s exit (The Information).
Pokemon Go players have unwittingly trained AI to navigate the world (404 Media).
SpaceX launches 4th Starship flight of 2024 (Space.com).
Germany suspects sabotage behind severing of critical undersea cables (Semafor).
$100 billion Middle Eastern AI fund plots its US expansion (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Manager Solutions Architecture: AWS, New York, NY
Director of Product Management: Warner Bros Discovery, New York, NY
Report: Mass surveillance and human rights violations in Denmark
Source: Created with AI by The Deep View
Amnesty International last week released the results of a lengthy investigation it had undertaken into Denmark’s digitized social welfare system in a report titled: Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State.
The investigation found that UDK, the Danish welfare authority, has — in collaboration with private firms ATP and NNIT — operated a vast network of potentially discriminatory fraud detection algorithms, combined with mass surveillance efforts.
UDK, ATP and NNIT did not return requests for comment.
“This mass surveillance has created a social benefits system that risks targeting, rather than supporting the very people it was meant to protect,” Dr. Hellen Mukiri-Smith, Amnesty International’s Researcher on Artificial Intelligence and Human Rights, said in a statement. “The way the Danish automated welfare system operates is eroding individual privacy and undermining human dignity.”
The details: The UDK uses a network of 60 fraud detection algorithms, which are intended to flag potential instances of social benefits fraud. These, like all algorithms, make predictions based on data.
The UDK feeds its algorithms public data regarding Danish citizens — Amnesty International’s research found that the “Danish government has implemented privacy-intrusive legislation that allows for the collection of data from residents in receipt of benefits and their household members, without their consent, for the purposes of surveilling its population to control for fraud.”
This approach is both “highly invasive” and of “questionable utility,” given that “discriminatory structures are embedded in the design of UDK/ATP’s algorithmic models and enable the creation and promotion of categorizations based on difference or ‘othering.’”
Amnesty International found that these algorithms, designed to predict which people are likely to commit fraud, risk “dangerously and disproportionately targeting already marginalized groups,” such as low-income groups, migrants, ethnic minorities and refugees. The organization argues that this constitutes social scoring, a violation of the European Union’s AI Act, which bans applications of AI that pose “unacceptable risks.” Social scoring, biometric categorization, deceptive AI and emotion inference are all prohibited by the law.
The UDK, in response to Amnesty International, told the organization that its collection of public data was “legally grounded” and that its algorithmic approach does not constitute social scoring. The UDK did not provide any details.
Running through all of this, as the organization points out, is a pervasive lack of transparency.
Where to go from here: Amnesty International recommended that the UDK immediately stop using algorithms that constitute social scoring until it can demonstrate otherwise; it also called on Danish authorities to introduce greater transparency into their algorithmic operations and asked the Danish parliament to enact legislation establishing an oversight system to monitor UDK’s use of algorithms.
You can read the report in full here.
It is important to note that this is neither new nor unique. Setting aside similar instances of algorithmic discrimination in automated governmental decision-making more broadly, discrimination in automated welfare distribution has been an issue for years.
An eerily similar algorithm was banned in the Netherlands in 2020 by the District Court of The Hague; that algorithm has since been re-introduced. And last year, Wired reported on this very situation; as Amnesty International’s report shows, nothing has changed.
Amnesty International has published reports on, again, eerily similar welfare algorithms in the Netherlands, India and Serbia.
We talked about this general idea recently as it pertains to carceral AI; at the time, I suggested that “the main question should be whether AI is needed, not how to use it.”
The most accessibly bleak future offered by artificial intelligence is one of mass surveillance and algorithmic decision-making without proper human oversight. The impacts of that can be far-reaching, from opaquely deciding who gets social welfare benefits, to enabling predictive policing at scale.
I find myself often coming back to this kind of fundamental human question: what is the purpose behind our actions, and does automation serve that purpose or pervert it?
Here, the purpose of social welfare is, very simply, to help people. And there remains no evidence that welfare fraud is itself a large enough reality to justify such an undertaking to prevent it — researchers have called the idea of widespread welfare fraud a myth; a 2021 report found that in France, welfare fraud amounted to 0.39% of all benefits paid. Data regarding welfare fraud in Denmark remains unclear, but in the U.S., fraud made up less than 1% of food stamps benefits in 2018 and less than 2% of unemployment benefits in 2013.
The bigger problem underlying all of this is that such efforts have been ongoing for at least a decade around the world. Even as they get dragged back to the surface, harm has already occurred, and governments appear uninterested in halting their algorithmic operations.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
“I don’t think AI would have been allowed to create something as bland as #1. It’s interesting, but not as much pizzazz as #2.”
Selected Image 2 (Right):
“Good job!”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on California’s deepfake election law:
30% of you said the law is needed; 23% said it might be a free speech violation and 16% said Musk just likes suing people.
Needed:
“The AI cat is out of the bag … there should be a guardrail of some kind — yes, you should be allowed to make silly content, (freedom of expression) but it should be marked as such. Especially if it contains language like ‘I'm ___ and I endorse this message.’ It has the same effect psychologically as shouting falsely that there's a fire … which is NOT protected under the 1st Amendment.”
Free speech violation:
“Politicizing AI is what this law is all about, given it specifically is targeting elections. And Newsom just loves attention. Democracy is a slow-moving process, and for good reason. Whenever it tries to move quickly and broadly, it causes more damage than it prevents. Free speech is the fundamental reason America is the nation the world looks up to. Democracy and capitalism depend on it. It needs to be protected at all costs. Any limits put on it must be challenged vigorously, as if our lives depend on it. Because they do.”
What do you think will happen with Nvidia post-earnings?