⚙️ How one energy company is utilizing AI

Good morning. Today marks my 100th edition.

I have now come to you 100 times, bringing you 400 stories that add up to — according to my very rough calculations — approximately 150,000 words. That’s like 2 whole books. 

Or one very large book (or 100 very small books). 

Anyway, appreciate you all for following along, sinking into the stories and thinking hard about AI.

We’ve got a ton of exciting stuff coming down the pike.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

AI for Good: Colorado’s enhanced emergency response

Source: Unsplash

Last month, Colorado’s Jefferson County deployed a cloud-based call platform designed to increase the speed and efficiency of its emergency responses. 

The details: The platform, developed by Carbyne, comes amid staffing shortages and steady increases in call volumes. 

  • A key component of the platform is its AI-enabled real-time translation capabilities, “enhancing response times for non-English speakers by up to five minutes.”

  • The system also comes with a “call triage solution,” which reduces duplicate bystander reports during major incidents, freeing staff to respond to higher-priority emergency calls. 

The platform enables video calling and text messaging and is able to locate callers with a high degree of accuracy, speeding up response times. 

Michael Brewer, deputy director of Jeffcom 911, said the system “allows us to stay ahead of the curve and perform better.”

New AI agents are emerging daily, but how many actually work? Numbers Station releases an exclusive look at how they deliver analytics agents to their customers. Their agents work in unison so users can query across different data sources.

The fleet of agents:

  • The Search Agent – unifies fragmented analytics assets (dashboards, schemas, warehouses, email) to find the one that answers the question.

  • The Diagnostic Agent – goes beyond the capabilities of existing dashboards and initiates a dynamic drill down to find the root cause of a trend.

  • The Next Best Action Agent – recommends an action for the user to take based on its findings.

  • Finally, a Tool Agent – implements the suggested action by plugging into external tools (e.g. a GDrive agent to create and share a presentation with other teams), so you don’t have to.

Skip the waitlist and request a demo to see all the agents in action.

The OpenAI chaos continues

Source: OpenAI

On Wednesday, OpenAI CTO Mira Murati said that, after more than six years, she is departing the company. 

Shortly after that announcement, research chief Bob McGrew and research vice president Barret Zoph also announced their resignations. CEO Sam Altman said in a statement that the resignations weren’t related; it just “made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership.”

There’s more. At the same time, Reuters reported that OpenAI is working to restructure itself into a for-profit company. 

  • The non-profit would still exist as a minority stakeholder, but the corporation would no longer be controlled by OpenAI’s nonprofit board. 

  • As part of this restructuring, Altman would gain equity in OpenAI, which might soon be valued at $150 billion. 

The context: A little less than a year ago — when OpenAI’s board fired and then rehired Altman — the Game of Thrones-esque saga of OpenAI began, and it has been running hot ever since. Co-founder Ilya Sutskever and former safety leader Jan Leike left in May, co-founder John Schulman left last month, co-founder Greg Brockman is on leave and Andrej Karpathy is also gone. 

This cannot be a reassuring sign to the investors OpenAI is currently courting.

Webinar Today: Understanding AI Trust and Hallucinations

Last chance! We're hosting a webinar today at 1PM ET / 10AM PT in collaboration with OpenAI, focusing on a critical aspect of AI adoption: increasing trust and preventing hallucinations.

This in-depth session will offer exclusive insights into:

  • The underlying causes of AI hallucinations

  • Ada & OpenAI's innovative approaches to addressing this challenge

  • Strategies for enterprise-level Customer Service scaling while mitigating risks

  • Boost your software development skills with generative AI. Learn to write code faster, improve quality, and join 77% of learners who have reported career benefits including new skills, increased pay, and new job opportunities. Perfect for developers at all levels. Enroll now.*

  • Interested in working with a Customer Service AI Agent trusted by enterprises to serve billions? See how Ada effortlessly automates customer service across messaging, email, voice and SMS. Sign up for a free trial!*

  • NYC Mayor Eric Adams indicted in campaign finance case (CNBC).

  • Alphabet, Goldman Sachs and others to settle charges over late filings, SEC says (Reuters).

  • Meta’s big tease (The Verge).

  • Why Fanatics is losing investor fans (The Information).

  • Climate concerns rise as COP29 approaches (Semafor).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Study: LLMs d0n’t 7nd3rst8nd the w0rld b3hind w0rds (but you do)

Source: A screenshot of a chat I had with GPT-4o.

Large language models (LLMs) are, very basically, massive next-word prediction engines. Their ability to predict the next word grants the illusion of intelligence, an effect compounded by our human willingness to equate fluent language with intelligence. 

What happened: A recent study — co-authored by cognitive scientist Gary Marcus — sought to further examine this through something called leetspeak, a method of writing where letters are substituted w1th n7mb3rs (with numbers) to make a given text harder to decipher. 

  • Since “leetspeak alters precisely the form — the very essence of LLM training — but not the meaning” of the words, the researchers predicted that humans would perform far better in both accuracy and reasoning. They were right. 

  • They first prompted several different AI models with leet-based prompts, asking each model to explain each answer it gave, which enabled the researchers to grade each model. GPT-4o performed the best, with an accuracy of .79 and a reasoning score of .54, indicating that “often the LLMs do not correctly explain their behavior.”

But when compared to a trial of 50 humans, even the best AI couldn’t hold up: the humans scored an average accuracy of .95 and a reasoning score of .89.

Why it matters: This adds further weight to the idea that LLMs, far from human-level, are incapable of mastering meaning. “It is possible that these diversions entail a potentially fundamental and irreconcilable divide between synthetic and organic computational systems, pointing to concerns that go beyond scalability,” the paper reads. 

The key point is the disparity between accuracy and reasoning. Go test this out, and ask the model to “list the numerical substitutions” after it answers. Running this test myself, I found that 4o is highly accurate in decoding leet but can never fully explain why. 
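If you want to generate your own test prompts, here’s a minimal Python sketch of leetspeak-style substitution. The mapping below is my own illustration, not the exact scheme used in the paper; the point is just that the surface form of each word changes while the meaning stays intact.

```python
# Letters swapped for similar-looking digits (an illustrative mapping,
# not the paper's exact substitution scheme).
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

def to_leet(text: str) -> str:
    """Replace mapped letters with their digit look-alikes."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

def from_leet(text: str) -> str:
    """Invert the mapping to recover the original letters."""
    inverse = {v: k for k, v in LEET_MAP.items()}
    return "".join(inverse.get(ch, ch) for ch in text)

print(to_leet("with numbers"))  # → "w17h numb3r5"
```

Paste the encoded output into a chat, ask the model what it says, and then ask it to list the numerical substitutions it used; the gap between the two answers is exactly what the study measured.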

How one energy company is utilizing AI

Source: National Grid

At their core, machine learning and artificial intelligence are about enhancing efficiency. Even as the data centers that power AI regularly add complications for energy utilities and the grid, some of the innovations offered by hyper-targeted applications of AI are helping those same utilities make desperately needed efficiency upgrades. 

For years, National Grid Partners — the corporate venture capital arm of the U.K. and U.S.-based utility company National Grid — has been identifying and investing in companies whose technological innovations can be applied to improve the company’s energy assets.

Pradeep Tagare, VP of investments at National Grid Partners, told me that the venture capital arm has invested $450 million over the last five years or so in around 50 companies. He said that artificial intelligence will impact “pretty much every aspect of our operations.” 

The details: National Grid Partners has already deployed a number of AI-driven technological innovations, spanning everything from autonomous drones to robotic ‘dogs.’ 

  • The company deploys autonomous drones to inspect substations and other hardware, improving maintenance efficiency. National Grid is also working — through its VC arm — with a startup called LineVision, which uses lidar cameras and machine learning to identify sagging power lines; the system can increase line capacity by around 40%. 

  • In addition to predictive maintenance algorithms, the firm also uses a system — developed by AiDash — that allows for intelligent vegetation management through satellite imagery and AI models. Tagare said this allows for enormous improvements in safety and cost management.

The robotic ‘dog,’ Spot, is used to identify potential problems in substations, something that can now be done without risking human lives. 

Renewables: Tagare told me that AI also plays a significant role in making renewable energy accessible. With renewable energy reaching cost parity with traditional energy sources, he said that the biggest challenge isn’t generating more renewable capacity, but connecting that capacity to the grid. 

The problem is that you “have to worry about balancing the grid,” according to Tagare. This led National Grid Partners to AutoGrid, a startup that runs machine learning models on a variety of data points — that examine everything from the load on the grid at a certain time, to the historical load, to weather forecasts, etc. — which allows for renewable energy sources to be connected to the grid “at the right time without causing the grid to destabilize.” 

  • The next evolution of this — which Tagare added is still “way out” — involves creating AI models of the grid that would enable automated, dynamic grid balancing. 

When we talk about AI and energy, we’re talking about two things: the first is sustainability, and the second is grid functionality. On the functionality side, Tagare said that the grid was “not designed” for this massive, “exponential” increase in energy demand coming from the AI sector. 

  • These AI-based innovations — specifically in LineVision — are designed to squeeze out more capacity to meet this demand while new infrastructure is being built. 

But on the sustainability side, Tagare said that — for National Grid — part of the answer lies in renewables and the technical innovations that enable utilities to bring more renewables onto the grid. “Our belief is that you can get there without necessarily having to compromise on your net-zero goals,” he said. 

This comes as the details of Big Tech’s progress on those net-zero goals remain obscured, as do the details of AI’s environmental impact. Researchers have made educated estimates on this point, but the companies are staying mum.

Which image is real?

Login or Subscribe to participate in polls.

🤔 Your thought process:

Selected Image 2 (Left):

  • “Fake one had too much clutter on the tarmac around the plane. Looks like pallets or cargo, not luggage, but the plane is a passenger flight.”

Selected Image 1 (TK):

  • “I did NOT go off the clouds today! They're both really good but I thought the lighting on 2 was off.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the content authenticity initiative:

A third of you would feel way better if every piece of digital content had some sort of content authenticity tag on it. 22% don’t really care and 13% wouldn’t feel any better at all.

11% said such a model won’t ever become widespread. Here, I would think about the evolution from HTTP to HTTPS (the S stands for secure), which gradually became the norm on the internet.

Absolutely:

  • “But how long would it take for the bad guys to create a fake authentication label?”

My only consolation there is that it would likely take quite a bit of work (something the bad guys might not be interested in), and the idea is to hold everything to a single, open-source protocol whose labels can’t be replicated without being legitimately applied.

Something else:

  • “In a digital world where every second matters and anything can be digitally rendered … trust will likely always be an issue … the only truly safe solution is to just unplug — so accepting the digital risk is simply part of life, I suspect.”

What do you think about Colorado's AI-powered call center?

Login or Subscribe to participate in polls.