
⚙️ Big Tech’s carbon footprint sleight of hand

Good morning. Good news? It’s Friday.

Less good news? Big Tech giants are emitting way more carbon dioxide than their accounting shows, due to renewable energy credits.

We dive into it all below.

Happy almost-Saturday.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: The backbone of NASA’s Mars rover

Source: NASA

NASA’s Perseverance Mars rover mission has, for around three years, been testing a new form of artificial intelligence that seeks out Martian rocks and minerals. 

The details: The software works to support PIXL, a spectrometer designed by NASA to help scientists determine whether rocks formed in conditions that could have been supportive of ancient microbial life. 

  • The AI element allows for “adaptive sampling,” where the software autonomously positions the rover’s instruments close to a target rock, then accesses PIXL’s scans to find minerals that are worth deeper examination. 

Why it matters: “The idea behind PIXL’s adaptive sampling is to help scientists find the needle within a haystack of data, freeing up time and energy for them to focus on other things,” Peter Lawson, who led the implementation of adaptive sampling before retiring, said in a statement. “Ultimately, it helps us gather the best science more quickly.”
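For illustration, an adaptive-sampling loop of the kind described above might look roughly like this sketch. Everything here — the function names, the mineral scores, the threshold — is a hypothetical stand-in for the pattern, not NASA’s actual PIXL software.

```python
# Hypothetical sketch of an adaptive-sampling loop; names, scores,
# and the threshold are illustrative, not NASA's actual software.

def scan_target(readings: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Return minerals whose detection score warrants deeper examination."""
    return [mineral for mineral, score in readings.items() if score >= threshold]

def adaptive_sampling(targets: list[dict[str, float]]) -> list[str]:
    """For each scanned rock, flag the minerals worth a closer look."""
    flagged = []
    for readings in targets:  # one dict of mineral -> detection score per rock
        flagged.extend(scan_target(readings))
    return flagged

rocks = [
    {"carbonate": 0.9, "olivine": 0.4},  # carbonates can indicate ancient water
    {"sulfate": 0.75, "basalt": 0.2},
]
print(adaptive_sampling(rocks))  # ['carbonate', 'sulfate']
```

The point of the pattern is the triage: the software surfaces only the high-scoring candidates, so scientists spend their time on the needle rather than the haystack.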

Developers - Deploy without P0s

Deployments work well until something breaks. Once an incident happens, chaos follows (remember CrowdStrike?).

Aviator Releases is a unified dashboard to help teams coordinate releases, cherry-picks, and rollbacks in large code bases while removing any human errors. It integrates with your existing CI/CD systems and GitHub.

Empower your engineers (and even cats 🐱) to take care of releases smoothly and safely with Aviator.

Microsoft delays ‘Recall’ feature to October

Source: Microsoft

Weeks after pausing the launch of its highly criticized Recall feature, Microsoft will be rolling out a preview of Recall to Windows Insiders in October. The tech giant said it would share more information when the feature goes live. 

If you don’t recall: Recall was unveiled in May as a flagship feature of Microsoft’s new line of AI-enabled laptops. But it was quickly — and very publicly — ridiculed for clear privacy and security violations. 

  • The feature takes constant screenshots of a user’s screen, then uses AI to make a database of those screenshots searchable. 

  • Cybersecurity researcher Kevin Beaumont said that there were gaps in Recall’s security “you could drive a plane through.” The root of the problem is that this database of screenshots was easily accessible to hackers or users who gained remote or physical access to the computer. 

“Recall enables threat actors to automate scraping everything you’ve ever looked at within seconds,” he said at the time. And to make matters worse, Recall was initially enabled by default. 

The latest: In June, Microsoft delayed the launch and addressed some concerns, changing Recall to opt-in and adding security and encryption layers. 

In its update this week, Microsoft said that “security continues to be our top priority.”

Accurate, Explainable, and Relevant GenAI Results

Vector search alone doesn’t get precise GenAI results. Combine knowledge graphs and RAG into GraphRAG for accuracy, explainability, and relevance.

Read this blog for a walkthrough of how to:

  • Work with GraphRAG

  • Build a knowledge graph

Take the next steps to achieve better GenAI with Neo4j today.

  • Skyfire, an AI agent startup, launched this week with $8.5 million in seed funding.

  • Conversational AI startup SleekFlow raised $7 million in Series A funding.

  • Perplexity AI plans to start running ads in the fourth quarter as AI-assisted search gains popularity (CNBC).

  • Intuit forecasts annual revenue above estimates on AI-driven financial tools (Reuters).

  • Body of British tech billionaire Mike Lynch recovered off the coast of Sicily (The Verge).

  • US feds are tapping a half-billion encrypted messaging goldmine (404 Media).

  • A surprisingly high number of workers say they’re OK with their employer listening in on their work chats and email (Fortune).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Neuralink completes its second brain implant surgery

Source: Neuralink

Several months after successfully implanting a brain chip in its first patient, Neuralink, Elon Musk’s brain chip startup, has completed its second surgery. 

The details: Neuralink said that the patient — named Alex — was discharged a day after the surgery and is recovering well. Neuralink said that through the device, he is already improving his ability to play video games and design objects with 3D software. 

  • Neuralink said that it is working on decoding multiple clicks simultaneously, and is also developing an algorithm to detect handwriting intent for faster text entry. 

  • The company also plans to combine the device with physical hardware — such as a robotic arm — that would enable patients to regain more independence. 

“The Link is a big step on the path of regaining freedom and independence for myself,” Alex said in a statement. 

The context: What’s on display here is a brain-computer interface (BCI). The general idea is that sensors, loaded up with AI technology, are able to read and then decode brain signals, which computers can then process. 

  • Neuralink has faced serious scrutiny from physicians for its alleged practices and potential health risks due to the invasiveness of the procedure. 

  • Musk, meanwhile, has continuously hyped the non-medical capabilities of future iterations of his device. One medical ethics professor last year said that “there’s still so little known about the brain that getting people’s hopes up about what’s possible in the near term may be misleading and may lead to skepticism around neurotech.”

Other BCI options out there, meanwhile, are noninvasive (they record brain activity from the scalp), and some companies are experimenting with less invasive methods (a soft electrode that sits on top of the brain, rather than being inserted into it). 
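The read-then-decode pipeline described above can be sketched in miniature. To be clear, everything below is a toy: the sample windows, the mean-amplitude feature, and the threshold “classifier” are assumptions for demonstration, not Neuralink’s (or anyone’s) actual decoding algorithms.

```python
# Toy illustration of a BCI decoding pipeline: raw electrode samples
# are reduced to a feature, which a (trivial) classifier maps to an
# intent. All signals and thresholds here are simulated.
import statistics

def extract_feature(samples: list[float]) -> float:
    """Reduce a window of raw electrode samples to one feature (mean amplitude)."""
    return statistics.fmean(samples)

def decode(samples: list[float], threshold: float = 0.5) -> str:
    """Map a signal window to a simple intent: 'click' or 'rest'."""
    return "click" if extract_feature(samples) > threshold else "rest"

print(decode([0.9, 0.8, 0.7]))  # click
print(decode([0.1, 0.0, 0.2]))  # rest
```

Real systems replace the threshold with trained models over many channels, but the shape is the same: sensors produce signals, features are extracted, and a decoder turns them into commands a computer can act on.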

Big Tech’s carbon footprint sleight of hand

Source: Google

At the heart of what researchers — and civilians, for that matter — want from the AI industry is transparency. Largely, we haven’t gotten it. 

Models are locked up, training data comes out only through small leaks to the press and research isn’t shared. A somewhat separate element of this involves information on carbon emissions and energy use; all we know for sure is that, due to massive investments in AI — which requires far more energy to run than straight-up cloud or internet applications — energy use is increasing. 

But the details of that remain thin on the ground. 

This was one of the first things Dr. Sasha Luccioni — a climate and AI researcher — told me, more than a year ago, when I started looking into the intersection of AI and the climate more closely: “it's hard to do any kind of meaningful studies on (LLMs) because you don't know where they're running, how big they are. You don't know much about them."

Here’s what we do know: We know that investments in AI are pushing Big Tech further from their climate goals, because that’s what Big Tech has told us. In 2023, Google’s emissions increased 13% year-over-year to 14.3 million tons of carbon dioxide. 

  • “As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute,” Google said. 

  • Microsoft noted a 30% increase in emissions. And both companies noted steadily expanding water consumption as well. 

We also know that electricity demand has been surging for the first time in two decades, and it’s due to this universal push for energy-hungry data centers (demand had stayed flat because advances in technology kept making us more energy efficient; those advances can no longer keep up). 

  • The above is a point that doesn’t make its way into headlines too often, but was shocking when I first heard it. It bears repeating, loudly, and often. 

A recent Bloomberg report found that several Big Tech giants have been reporting numbers that don’t quite add up. 

On paper vs. in the atmosphere: Several of these companies rely on credits (renewable energy certificates, or RECs) that make carbon emission accounting look a lot better. 

  • RECs are issued by renewable energy providers as a way to track energy sources. But companies can purchase unbundled RECs, which allow them to account for renewable energy without using it — it was thought that doing so would increase renewable generation, which is why it’s allowed. 

  • But plenty of research has found that it doesn’t do this — it instead “encourages companies to buy green energy certificates and claim zero total emissions from electricity use.”

Bloomberg Green’s analysis found that, if companies didn’t count unbundled RECs, Amazon’s 2022 emissions would be 8.5 million tons higher than reported, Microsoft’s would be 3.3 million tons higher than reported and Meta’s would be 740,000 tons higher than reported. Google has phased out its use of unbundled RECs. 
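The arithmetic behind those figures is simple: a company’s footprint without the paper offset is its reported emissions plus the tons that unbundled RECs wiped away on paper. The sketch below uses the gaps Bloomberg Green reported; the 10-million-ton baseline in the example is a hypothetical placeholder, since the article gives only the gaps, not each company’s reported totals.

```python
# The 2022 gap Bloomberg Green found if unbundled RECs were excluded
# (tons of CO2; gap figures from the analysis cited above).
REC_GAP_TONS = {
    "Amazon": 8_500_000,
    "Microsoft": 3_300_000,
    "Meta": 740_000,
}

def without_unbundled_recs(reported_tons: int, rec_gap_tons: int) -> int:
    """Reported emissions with the paper-only REC offset added back in."""
    return reported_tons + rec_gap_tons

# Example with a hypothetical reported figure of 10 million tons:
print(without_unbundled_recs(10_000_000, REC_GAP_TONS["Amazon"]))  # 18500000
```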

None of the companies denied the practice, with spokespeople simply saying that they plan to phase out their use of unbundled RECs at some point. 

In 2022, researchers found that “the widespread use of RECs … allows companies to report emission reductions that are not real.”

These fake numbers could give investors and users the false impression that an AI query is fine, climate-wise, since it’s powered by what Big Tech calls renewable energy — when, in reality, it often isn’t. 

We need transparency. We need regulation. 

We need to remember that we are in a desperate race to reduce our global emissions and limit warming to 1.5 °C. 

Now is not the time to boil the oceans for artificial promises, and now is certainly not the time to tolerate obfuscatory accounting of just how much we are boiling those oceans.

Which image is real?


🤔 Your thought process:

Selected Image 1 (left):

“Image 2 looks fake because the person’s hands don’t look like they’re positioned correctly for the pockets on that type of rain jacket. Secondly, the reflection of the person’s legs in the water doesn’t follow the natural V shape you would expect to see.”

Selected Image 2 (right):

“The yellow overall on Image 1 is not wet, in spite of the rain.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI in journalism:

Around a third of you don’t care how it’s used, so long as it’s disclosed; 22% of you said that targeted applications (like the WaPo investigation) are okay.

Just slightly more than 10% don’t want journalists to use AI for anything, and around 15% just don’t want journalists to use it for article writing.

Use it for whatever, just disclose it:

  • “With a 25-year career in published IT and business process research, clients/audiences today more than ever need to trust published outputs. By citing sources, methods of gathering data and tools used to generate as well as edit content, authors can build and maintain trust among their audiences more effectively.”

Don’t use it at all:

  • “You lose the human touch introducing AI into journalism. AI can't resonate with aspects of an article like a human could and using it for research could mean that you lose major information that an AI doesn't deem relevant because it has no emotion tied to anything.”

Would you get a brain implant if you don't have a significant medical reason for it?
