⚙️ The public health crisis of AI
Good morning. SpaceX, just one of several Elon Musk-owned companies, recently achieved a $350 billion valuation.
Separately, driven by a Tesla stock surge, Musk’s personal wealth is now right around the $450 billion mark, making him the first person to surpass $400 billion. The world’s second-richest man, Jeff Bezos, has a mere $249 billion to his name.
1% of 1% of Musk’s fortune would be $45 million … in case he’s interested in sharing the wealth…
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌎 AI for Good: Advanced climate modeling
💻 Character AI ships safety updates following lawsuits
📊 The push for more - and more open - data
🚨 The public health crisis of AI
AI for Good: Advanced climate modeling
Source: Unsplash
A group of climate scientists at Stanford University and Colorado State University recently conducted a study aimed at determining the pace of global warming under different carbon emissions scenarios.
The details: The researchers trained and leveraged convolutional neural networks (CNNs) to refine their projections.
The AI model was trained on a vast quantity of historical temperature and greenhouse gas emissions data; once fed actual historical temperatures, along with the other components of a proposed emissions scenario, it could predict future global temperatures.
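The paper's exact architecture isn't reproduced here, but as a minimal sketch of the general approach (a small convolutional network mapping gridded climate fields to a single warming estimate), it might look something like the following. All names, layer sizes and grid dimensions are our assumptions, not the study's code.

```python
# A minimal, illustrative sketch of a CNN-based warming estimator.
# NOT the study's actual code; channels, sizes and names are assumed.
import torch
import torch.nn as nn

class WarmingCNN(nn.Module):
    def __init__(self, in_channels: int = 2):  # e.g., temperature + emissions maps
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the lat/lon grid to one vector
        )
        self.head = nn.Linear(32, 1)  # a single scalar: projected warming

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One hypothetical sample: a 64x128 lat/lon grid with two input channels.
model = WarmingCNN()
fields = torch.randn(1, 2, 64, 128)
projected_warming = model(fields)  # e.g., degrees C relative to a baseline
```

The appeal of this kind of setup is that the network can be trained cheaply on the many climate-model simulations that already exist, then calibrated against observed temperatures, which is the refinement Barnes describes below.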
“AI is emerging as an incredibly powerful tool for reducing uncertainty in future projections,” Elizabeth Barnes, a professor of atmospheric science at Colorado State and one of the paper’s authors, said in a statement. “It learns from the many climate model simulations that already exist, but its predictions are then further refined by real-world observations.”
The researchers found that there is a 50% chance that global warming will exceed 2 degrees Celsius, even if we reach net-zero emissions by the 2050s, one of the more optimistic scenarios.
Why it matters: The results here, according to the researchers, suggest that the world can no longer focus exclusively on decarbonization; we must also invest in measures that will enable us to adapt to a hotter, more extreme climate.
Local AI eliminates the need for expensive cloud infrastructure by processing data directly on devices like smartphones, laptops, or servers. This approach allows organizations to maintain total control of their data, enhancing security and boosting performance.
But beyond that, local AI solutions also improve business outcomes. From operational efficiency to cost reduction, from security to support to sales, local AI outperforms cloud AI across the board.
Want to learn more about why 300 industry leaders report smoother adoption and higher satisfaction with local AI than cloud AI?
Character AI ships safety updates following lawsuits
Source: Character AI
In the wake of a second lawsuit that painstakingly links Character.AI to radical, violent personality changes and mental health degradation in young users, the AI companionship app on Thursday announced a few safety-focused updates.
The details: Character said that it spent the past month developing a new model specifically for teenage users (the first lawsuit was filed roughly six weeks ago).
Character said the goal is to “guide the model away from certain responses … reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.” The result is two models — and two different experiences — on the platform.
Character said it is also taking steps to identify and intervene against sensitive inputs, not just outputs. It will soon roll out parental controls and a time-spent notification, which will pop up after an hour on the app (according to Sensor Tower, the average user spent 98 minutes per day on the app this year).
The developer also changed its disclaimers, both for normal characters and for those that play the role of medical professionals (the second lawsuit accused Character of the unlicensed practice of psychotherapy).
A screenshot of a character I pulled up.
In a quick test — as a 16-year-old user — I came across the expanded disclaimers and noticeably less suggestive characters than last time. The characters were much more inclined to admit that they were artificial. However, my sensitive inputs once again did not set off any interventions.
Shares of ServiceTitan spiked more than 40% on Thursday during its debut on the Nasdaq.
OpenAI rolled out “advanced video mode” to ChatGPT on Thursday, an integration that will allow ChatGPT to respond in near real-time to video input.
Microsoft anchors $9B renewable energy coalition (TechCrunch).
Trump extends inauguration invite to Xi, signaling potential thaw in US-China relations (Semafor).
Malaysia tightens grip on internet, in blow to online freedom (Rest of World).
Adobe shares plunge 13% on disappointing revenue guidance (CNBC).
How Anthropic got inside OpenAI’s head (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Hire Your Next DevOps Engineer for 70% Less!
Secure elite DevOps from Latin America to scale your team.
Save 70% on salaries with Athyna’s global talent platform.
Enjoy a $1,000 discount exclusively for readers of The Deep View.
The push for more - and more open - data
Source: Microsoft
In the field of artificial intelligence, data is king. But data has lately been causing problems, ranging from copyright infringement lawsuits, which for some developers have necessitated costly licensing arrangements, to the cost and complexity of acquiring, cleaning and labeling it.
The result is yet another element of AI development that remains largely within reach of massive corporations, rather than academics and smaller players.
What happened: Harvard University, in collaboration with a number of Big Tech players, including Microsoft, Google and OpenAI, on Thursday announced the launch of a new, openly accessible Institutional Data Initiative that aims to change this.
The details: The dataset includes around one million public-domain books, among them works by Shakespeare, Dickens and Dante whose copyrights have long expired.
It’s not clear when or how the dataset will be released, but Harvard said that it is working with Google to “release this treasure trove far and wide.” The long-term idea is to have “no data left behind,” something the group plans to achieve by scraping institutional knowledge, making that information easier to find while lowering the barriers to AI development.
OpenAI, which is embroiled in a number of copyright-related lawsuits, said in a statement that it is “delighted” to support the project.
OpenAI has said that “it would be impossible to train today's leading AI models without using copyrighted materials.”
The public health crisis of AI
Source: Unsplash
As much as artificial intelligence is presented as a solution to a number of current problems and threats, from healthcare to sustainability, the technology today largely isn’t helping. It’s actually making things worse.
Before we get into it, the term “AI” has become a rather wide umbrella; the kind of algorithmic applications that are helping scientists with these problems — the kind of AI we talk about in our AI for Good section — are largely smaller and highly specific applications. Here, think computer vision models trained on just a few thousand photos of plastic in the ocean, or cancerous tumors. Often, these applications refer more to machine learning — and sometimes even more basic algorithms — rather than the Large Language Models (LLMs) that make up the heart of generative AI.
The size of these models is at the core of their energy intensity, which, in turn, drives the unsustainability of LLMs.
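As a loose back-of-envelope, using the common heuristic that training compute is roughly 6 × parameters × training tokens, the gap looks like this. Every hardware figure below is an illustrative assumption, not a measurement, and the heuristic is applied loosely to both model types purely for scale.

```python
# Loose back-of-envelope: why parameter count drives training energy.
# The 6 * N * D compute heuristic and all hardware figures are assumptions.

def training_energy_mwh(params: float, tokens: float,
                        flops_per_sec: float = 4e14,   # assumed sustained per-chip throughput
                        watts_per_chip: float = 700.0, # assumed per-chip power draw
                        pue: float = 1.2) -> float:    # assumed data center overhead factor
    flops = 6 * params * tokens           # approximate total training compute
    chip_seconds = flops / flops_per_sec
    joules = chip_seconds * watts_per_chip * pue
    return joules / 3.6e9                 # joules to megawatt-hours

# A small task-specific model vs. a Llama 3.1-scale LLM (both hypothetical):
print(f"small model:  ~{training_energy_mwh(5e6, 1e9) * 1000:.2f} kWh")
print(f"frontier LLM: ~{training_energy_mwh(405e9, 15e12):,.0f} MWh")
```

The point is not the exact numbers but the shape: energy scales roughly with parameters times data, which puts a frontier LLM's training run many orders of magnitude above a small, task-specific model's, before inference demand is even counted.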
What happened: A recent study conducted by researchers at UC Riverside and Caltech found that the rampant demand for AI (here, we’re largely talking about LLMs and generative AI) is fueling a steady, dramatic rise in “deadly” air pollution.
The details: The study, which has not been peer-reviewed, acknowledges the environmental impact of AI-optimized data centers — something that has been gaining attention in recent months — but goes a step further, arguing that “the AI hardware manufacturing process, electricity generation from fossil fuels to power AI data centers and the maintenance and usage of diesel backup generators to ensure continuous AI data center operation all produce significant amounts of criteria air pollutants.”
Exposure to these pollutants, especially for vulnerable people, is linked to a variety of health problems, from cardiovascular disease to lung cancer. Indeed, according to the World Health Organization, outdoor air pollution caused the premature deaths of 4.2 million people in 2019.
A 2023 study found that, between 1999 and 2020, around 460,000 deaths in the U.S. were attributable specifically to air pollution from coal-burning energy plants. And right now, the power demands of AI are such that older coal plants, which were scheduled for retirement, are remaining in operation.
The findings: Using a statistical modeling tool provided by the Environmental Protection Agency, the researchers projected that, in 2030 alone, U.S. data centers could contribute to approximately 1,300 premature deaths and 600,000 asthma symptom cases. Overall, they wrote that the public health costs could exceed $20 billion.
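The mechanics behind projections like these are worth a glance. EPA-style health-impact tools generally rest on log-linear concentration-response functions; the sketch below shows the generic form only, and every coefficient and input in it is a hypothetical placeholder, not a number from the study.

```python
# Generic log-linear concentration-response function of the kind used in
# EPA-style health-impact modeling. All inputs below are hypothetical.
import math

def excess_cases(baseline_rate: float,  # annual baseline cases per person
                 beta: float,           # risk coefficient per ug/m3 of pollutant
                 delta_c: float,        # modeled pollutant increase, ug/m3
                 population: float) -> float:
    return baseline_rate * (1 - math.exp(-beta * delta_c)) * population

# Hypothetical example: a 0.3 ug/m3 PM2.5 increase over 10 million people.
print(f"{excess_cases(0.008, 0.006, 0.3, 10_000_000):,.0f} excess cases per year")
```

Summed across pollutants, health endpoints and exposed populations, small per-person risk increases add up to the death, illness and dollar figures the researchers report.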
Depending on the location — and, therefore, the cleanliness of the grid — the researchers said that training an AI model roughly the size of Llama 3.1 can today produce a quantity of air pollutants that is equivalent to “driving a car for more than 10,000 round trips between Los Angeles and New York City.”
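For a sense of that scale, the rough arithmetic (the one-way driving distance is our assumption; the study's exact inputs may differ):

```python
# Rough scale check on the study's driving comparison.
one_way_miles = 2_800   # assumed LA-to-NYC driving distance
round_trips = 10_000
total_miles = 2 * one_way_miles * round_trips
print(f"{total_miles:,} miles")  # 56,000,000 miles, roughly 2,200 laps around the equator
```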
In 2023, pollution from U.S. data centers caused a total public health cost of around $5.6 billion, according to the study.
The real world: The reality described by the study isn’t abstract; it has played out, perhaps most prominently, in Memphis, Tennessee, the site of Elon Musk’s famed 100,000(+) GPU cluster.
In August, the Southern Environmental Law Center wrote in a letter to the EPA that the xAI facility had installed some 18 gas turbines, with more on the way, to feed its voracious power requirements. The result, at the time, was an increase in air pollution in a city already beleaguered by poor air quality.
The SELC reiterated in November that these turbines — which have increased in number — are one, operating without a permit, and two, spewing a mixture of dangerous chemicals including formaldehyde.
The researchers recommended the adoption of standardized methods for the reporting of air pollution caused by electricity generation and AI model training. They also recommended that communities impacted by this pollution be compensated by the tech companies causing it.
Google told me just this week that it believes “AI can be a big contributor to alleviating the climate crisis with: better models for prediction and monitoring, optimizing existing infrastructure and accelerating scientific breakthroughs.”
It is an impression shared by its Big Tech peers.
And it is an impression that overlooks the breadth and severity of the current and near-term impacts of investing so intensely in this technology.
Which image is real?
🤔 Your thought process:
Selected Image 2 (Left):
“The eggs in image #2 are more accurate for chickens. Everything looked really good in image #1, except for the eggs did not look like chicken eggs.”
Selected Image 1 (Right):
“The eggs in image 2 look like they have just come out of a box not a chicken.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on the FTC and Big Tech:
More than half of you want the FTC to continue going after Big Tech next year. Only a quarter of you don’t.
Yes!:
“Someone with teeth needs to. If the FTC is going to make informed rules/ guidelines for Big Tech to ensure our info is secure, they need to also have the ability to enforce these rules, and with significant enough consequences. (Not a slap on the wrist.)”
Nope:
“It needs to be monitored but not controlled like the FDA. It would thwart innovation — people need to exercise common sense and look after their children.”
Do you live near a data center? Do you notice air pollution related to it?