⚙️ Google emissions are spiking due to increased energy demands of AI
Good morning. We’re off tomorrow for July 4th — enjoy some fireworks, grill some burgers and take a day to unplug.
We’ll be back to wrap up the week on Friday.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: Revolutionizing the diagnosis of macular degeneration
Source: Unsplash
Age-related macular degeneration (AMD) represents a leading cause of vision loss that is expected to impact nearly 300 million people by 2040.
And while it is true that your eyes tend to change as you get older, loss of vision is not inevitable, especially if certain measures are taken early. Machine learning and AI can be used to indicate whether those measures are needed.
The details: Researchers have trained deep learning algorithms to auto-detect certain biomarkers that indicate the presence of AMD.
The FDA recently approved a tool — iPredit-AMD — which conducts eye health screenings in seconds. The system can also predict an individual risk score for late AMD within one to two years.
Researchers have also trained algorithms that have been able to predict the burden of treatment required for individual patients.
Why it matters: While macular degeneration has no cure, there are a variety of treatments — from injections to gene therapy and eye drops — that can prevent vision loss from worsening. The implementation of AI as a tool for rapid, scalable eye health screening and accurate categorization could help people find doctors and receive tailored treatment before it’s too late.
The only AI Crash Course you need to master 20+ AI tools, multiple hacks & prompting techniques to work faster & more efficiently.
Just 3 hours, and you'll be a pro at automating your workflow, saving up to 16 hours a week.
This course on AI has been taken by 1 million people across the globe, who have been able to:
Build No-code apps using UI-ZARD in minutes
Write & launch ads for your business (no experience needed + you save on cost)
Create solid content for 7 platforms with voice command & level up your social media
And 10 more insane hacks & tools that you’re going to LOVE!
Google updates disclosure policy for political ads
Source: Unsplash
One of the most significant concerns inspired by AI has to do with misinformation. Cheaply accessible chatbots severely lower the barrier to entry for bad actors, allowing small groups of people to quickly and easily spread an enormous quantity of misinformation.
While misinformation is of course not a new phenomenon, the AI change here is one of speed, scale and realism.
With elections in focus, Google is trying to get a handle on certain elements of AI-generated content.
The details: The tech giant said in an update to its political content policy that advertisers must disclose any election ads that “contain synthetic or digitally altered content that inauthentically depicts real or realistic-looking people or events.”
Advertisers are required to select an “altered or synthetic content” checkbox, which Google will use to generate an in-ad disclosure that viewers are looking at an AI-generated ad.
Why it matters: We’ve already seen instances of chatbot-generated misinformation, from synthetic photos of candidates to deepfaked robocalls and the proliferation of false political information.
But the important factor here involves a changing dynamic — cybersecurity researchers have told me that people should no longer trust online content by default.
Need a website? Generate one with AI in seconds using Mixo.
For this week only, get 20% off all website costs using offer code deepview20 at checkout.*
Tesla shares spike on better-than-expected deliveries report (CNBC).
Apple to get an observer role on OpenAI’s board (Bloomberg).
Making money as an influencer in Cuba is hard. These dancers found a way (Rest of World).
Netflix is starting to phase out its cheapest ad-free plan (The Verge).
Fed Chair Jerome Powell: US inflation is cooling again, though it isn’t yet time to cut rates (AP News).
Report: Algorithmic bias is ‘failing our youngest’
Source: Unsplash
We’ve talked before (and often) about the non-sci-fi ways that AI can perpetuate harm against people. One of these is algorithmic bias. While the idea of algorithmic bias might be most easily encapsulated by viral instances of chatbots spitting out biased images in response to prompts, the reality of algorithmic bias (which has been going on decades before ChatGPT) looks a little different.
Over the past few years, Danish child protective services have faced an ever-increasing workload; in 2019 alone, municipalities recorded nearly 140,000 notifications of concern.
Mandates that were introduced in 2013 require workers to respond to these notifications within 24 hours to help children who are in immediate danger.
This prompted Danish child protective services to explore the adoption of a decision support system (DSS), a predictive risk assessment algorithm designed to help workers handle this increased workload.
Researchers recently conducted an audit of the tool (though they were not granted access to the algorithm itself), and they found that the tool exhibits age-based discrimination.
“Overall, DSS suggests that older children are at substantially higher risk of maltreatment,” the authors said. “Any child above the age of 13 receives a risk score of a minimum of 6 solely because of their age. We believe this is an unintended and unmitigated consequence of the model, with potential age discrimination consequences if screening decisions are based on RS alone.”
The researchers said that the algorithm is “unsafe to use.” They urged organizations not to use it or similarly designed algorithms.
“We need a video editor by Tuesday, but it’ll take months to find someone in the U.S.”
We use Athyna at The Deep View — and a bunch of our friends do too. If you are looking for your next remote hire, Athyna has you covered.
The secret weapon for ambitious startups. No search fees. No activation fees. And up to 80% off hiring locally.
Incredible talent, matched with AI precision, at lightning speed.
Google emissions are spiking due to increased energy demands of AI
Source: Google
In 2020, Google made a pledge to transition to net-zero operations by 2030.
But Google on Tuesday released its annual sustainability report, which shows that — with just five-and-a-half years to go until that self-imposed deadline — it is moving in the opposite direction.
The details: Google said it expects its emissions to go up before they fall to targeted levels.
Last year, Google’s total greenhouse gas emissions (at 14.3 million tons of carbon dioxide) represented a 13% year-over-year increase and a 48% increase compared to 2019.
The company said that this result was due to increases in “datacenter energy consumption,” adding: “As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute.”
Google said in a statement that “scaling AI and using it to accelerate climate action is just as crucial as addressing the environmental impact associated with it.” The company added that it thinks AI could be leveraged to mitigate 5-10% of its emissions by 2030.
Note: Google also pledged in 2021 to replenish 120% of the water it consumes (datacenters are thirsty in addition to being energy-hungry). In 2023, the company replenished just 18% of its consumption.
This is pretty much exactly in line with the rest of the industry — Microsoft, despite its own lofty sustainability goals, in May noted a nearly 30% increase in emissions in 2023 amid a 23% increase in water consumption. The culprit? AI.
“First Microsoft, now Google are failing to meet their climate targets, citing AI's electricity use,” Dr. Sasha Luccioni, a researcher at the intersection of AI and sustainability, said. “This is what we should be concerned about, not AGI.”
The report inspired some on social media to offer their own solution to the problem: “Turn the AI off.”
Although clean energy sources are increasing in prevalence, the world is way behind the emissions targets it needs to be hitting. The planet is more than 1.1 °C warmer than it was at the start of the Industrial Revolution, and according to the UN, we are “not on track to meet the targets” stipulated by the 2015 Paris Agreement, which called for keeping global temperatures well below 2 °C.
A 2022 report from the IPCC said that emissions must peak by 2025 and fall by close to half by 2030 for the world to keep warming below 2 °C.
“It’s now or never, if we want to limit global warming to 1.5 °C,” Professor Jim Skea, co-chair of the report, said in 2022.
In other words, we are in the midst of a desperate race against time to reduce emissions. The increases in emissions that we’re already seeing as a result of AI are certainly not helping us achieve these vital goals. They’re doing the opposite.
The common response to this — and the response that Big Tech has been touting for months — is that AI can be used to boost sustainability efforts. And yes, there are many climate-positive uses of AI, most of which are centered around increasing operational efficiency to reduce energy use. But those solutions — many of which have been highlighted in our AI for Good section — are not magical. Most of these use-cases simply offer targeted data analytics that provide humans with the information needed to act.
Plus, many of these are small and narrow machine learning models that don’t emit nearly as much as the large language models that have recently become so popular.
The assumption that AI will just automatically slash emissions and save the planet is a false one.
Note: Even the applications of AI that can be used to help the climate will continue to increase emissions (this is why cost-benefit analyses when it comes to AI are so important).
Some iterations of the tech will provide certain groups with the tools needed to take adaptive measures.
But it is not a silver bullet.
We need to start being much more thoughtful about how, where and how often AI and LLMs are being applied, and how their energy impact is being mitigated.
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Your view on the ways AI will impact jobs:
More than half of you think AI will make your job obsolete. Some of you are using AI to stay ahead, while around 40% don't think AI threatens your job security.
It will make jobs obsolete (positive):
“If all human labor is not displaced by AI, we will have failed substantially. We have the opportunity with AI, Robots, and work automation to focus on areas of interest, instead of struggling with trading our lives for money.”
It will make jobs obsolete (negative):
“I work in marketing at a translation company. For the moment, I still have a job, but the company has already replaced most linguists with AI, given all new labor cost savings to the C-suite, and is currently looking to automate everything else possible, so I'm currently not feeling too great about the future.”
Some people are calling for an AI shutdown in light of climate impacts. What do you think?