- The Deep View
- Posts
- ⚙️ Google goes nuclear
⚙️ Google goes nuclear
Good morning. The New York Times sent a cease-and-desist letter to Perplexity, asking that the AI search startup stop using its content.
This marks the second AI entanglement for the Times, which sued OpenAI last year for massive copyright infringement.
In other news, today, we’re talking about nuclear energy — and the ways in which Big Tech is almost single-handedly bringing it back.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌎 Google goes nuclear
MBZUAI Research: Evaluating safeguards in LLMs
Source: Created with AI by The Deep View
The jump from the development of Large Language Models (LLMs) to their deployment has proven to be ethically fraught.
LLMs are known for their propensity to display a range of biases, reproduce copyrighted content and manipulate people. Such models have the capacity to aid cyber criminals and other bad actors in mass cyber-attacks, fraudulent scams and propaganda campaigns.
In order for developers to mitigate these elements through the enactment of targeted guardrails, they first need to identify them. While closed models, such as ChatGPT, tend to include guardrails that were constructed based upon datasets of known harmful prompts, open-source models tend to lack those guardrails.
What happened: Researchers from the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) are seeking to bridge that gap, providing a comprehensive dataset for the evaluation of safeguards specifically for open-source models.
Covering a range of mild to extreme risks, the dataset — nicknamed “Do Not Answer” — includes a list of nearly a thousand prompts where “all instructions should not be followed.” The types of harm identified in the dataset “indicate the specific vulnerabilities the LLM should mitigate.”
The researchers said that LLMs should “either reject, refuse to respond to, or refute the stance of all such questions.” They collected close to 6,000 responses to their list of problematic prompts.
The researchers defined and collected responses that they deemed to be “safe and responsible” which they used to assess the safeguards of several different LLMs. They went on to test automated approaches to assessing these safety mechanisms.
“Notably, our findings revealed that a suitably-trained small model can effectively perform the evaluation, yielding results that are comparable to those obtained using GPT-4 as an evaluator.”
To learn more about MBZUAI’s research visit their website.
Customer service startup Neuron7 secured $44 million in Series B funding.
AI observability startup Galileo raised $45 million in Series B funding.
National Archives pushes Google AI on employees (404 Media).
Four ways to advance transparency in frontier AI development (Time).
Amazon is using tech from a Khosla Ventures-backed startup to run robot warehouses at Whole Foods (CNBC).
Apple just announced a new, faster iPad Mini (The Verge).
OpenAI’s charity could soon be worth $40 billion (The Information).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Product Owner, Digital Advisory Platforms: UBS, New Jersey
Director, AI Products, Cybersecurity: SimSpace, Remote
Success.ai: A platform that connects you with hundreds of millions of professionals.
DuckDuckGo AI: A privacy-centric search engine with AI chat.
Semiconductors fall
Source: Unsplash
In reporting third-quarter earnings on Tuesday, chip equipment maker ASML reported lower-than-anticipated sales for 2025, cautioning investors about the remaining weakness in the semiconductor sector.
"While there continue to be strong developments and upside potential in AI, other market segments are taking longer to recover,” CEO Christophe Fouquet said in a statement. “It now appears the recovery is more gradual than previously expected. This is expected to continue in 2025, which is leading to customer cautiousness.”
ASML, one of Europe’s largest tech firms, supplies the equipment used to manufacture chips.
The ripple effects of the report — and the firm’s subsequent stock dive — were felt on Wall Street, where a number of major U.S. semiconductors fell in kind.
Shares of Nvidia retreated more than 5%, a significant pullback from the record highs the firm notched on Monday. AMD, Intel, Arm, Broadcom and Micron fell between 2% and 5%.
The semiconductor retreat also comes alongside a report out of Bloomberg News that the U.S. is considering applying a cap on export licenses of AI chips to certain countries.
Google goes nuclear
Source: Kairos
This week, Google signed an agreement with nuclear energy startup Kairos Power to purchase around 500 megawatts of power, something Google views as an essential step in meeting the rising energy demand of its AI technologies.
The details: The agreement — the financial terms of which were not disclosed — aims to bring Kairos’ first Small Modular Reactors (SMRs) online by 2030, followed by additional deployments through 2035.
“The grid needs new electricity sources to support AI technologies,” Google said in a statement. “Nuclear solutions offer a clean, round-the-clock power source that can help us reliably meet electricity demands with carbon-free energy every hour of every day.”
Google said that this new nuclear approach will be coupled with ongoing investments in other green energies, including wind and solar.
It’s not clear yet where the reactors will be built, with Kairos saying only that they will be placed in “relevant” locations.
The context: This comes, as we’ve talked about regularly, in the wake of steadily increasing energy requirements (and carbon emissions) from the data centers that are training and powering AI tech.
These new energy demands have pushed a number of tech giants, Google and Microsoft included, farther from their net-zero carbon emissions goals, a sacrifice they have deemed worthwhile considering their excitement regarding advancements in AI. It’s worth noting that investments and dominance in AI have served as quite a stock booster from early 2023 on.
Though data centers currently consume around 2% of global energy consumption, Goldman Sachs expects data center energy demand to grow by at least 160% by 2030; the carbon emissions of the sector, could, in turn, double by that date. (A single ChatGPT query is about 10x as energy-intensive as a single Google search, for comparison).
According to the International Energy Agency, data centers consumed around 460 terawatt-hours (TWh) of electricity in 2022. This number is expected to more than double to 1,000 TWh by 2026.
For context, in 2019, China consumed around 6,500 TWh of electricity, and Canada, Brazil, Germany and France each consumed around 500 TWh; the data centers that support AI are already consuming the electrical equivalent of a country, and that is expected only to rise.
In light of this — and the carbon emissions and grid destabilization that has come in hand with it — Big Tech has increasingly begun turning to nuclear energy.
Just last month, Microsoft signed a deal to restart one of the nuclear reactors at Three Mile Island, the site of the country’s worst nuclear meltdown; in March, Amazon purchased a nuclear-powered data center from Talen Energy.
Let’s talk nuclear: The SMRs that Kairos offers are, essentially, just smaller than normal nuclear reactors. This makes them more affordable and more flexible — in construction and placement — than traditional reactors.
Kairos’ reactors use molten fluoride salt as a coolant rather than water; further, their systems don’t require electricity to remove heat from the reactor core, meaning emergency cooling in the event of an incident is far more simplified.
Still, nuclear energy is not exactly zero carbon, it just emits far less than the production and processing of fossil fuels. The construction of reactors, as well as the mining and processing of nuclear fuel, do cause emissions.
Then, of course, there’s the radioactive waste that’s a necessary byproduct of nuclear fission. According to the Nuclear Energy Institute, this waste, once fully cooled in massive pools, gets stored on-site in massive concrete and steel containers called “dry casks.”
“The energy density of nuclear fuel means that nuclear plants produce immense amounts of energy with little byproduct,” the NEI wrote. “In fact, the entire amount of waste created in the United States would fill one football field, 10 yards deep. By comparison, a single coal plant generates as much waste by volume in one hour as nuclear power has during its entire history.”
Nuclear is not a bad thing, by any means.
The problem here is that these additional resources are greenifying the massive, additional energy demands coming from the data center; they are not reducing our current consumption.
Clean energy needs to very quickly expand far beyond catching additional requirements to actually reduce total demand. Regularly increasing that demand — provided it is covered by green energy — is not the way forward on that front; we need greater energy efficiencies and a reduction in demand so that clean energy can actually begin to make an impact.
And we need that to happen fast.
What will be interesting to watch, though, is, out of this energy desperation, the ways in which these massive Big Tech investments in nuclear power might actually kickstart a new nuclear energy revolution.
Which image is real? |
🤔 Your thought process:
Selected Image 2 (Left):
“Too many veins on the image one pumpkins. I’ve never seen a pumpkin with so many close veins like that. I took me under a minute to decide.”
Selected Image 1 (Right):
“It is difficult to contrast AI from real in images with very narrow depths of field. Still, it’s very unusual for AI to generate a pumpkin with dirt on it. That must have been a specific prompt, but the dirt was what made me think it was real.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Nvidia’s growth:
27% of you think there’s no ceiling on Nvidia’s growth; 27% think it’ll collapse when the bubble bursts.
The rest aren’t really sure.
I guess we’ll have to wait and see.
Have you encountered clear safety violations in LLM output? |