⚙️ Report: The mirage of ‘open’ AI

Good morning. ChatGPT started breaking for some users over the weekend. The culprit? The name “David Mayer,” which, no matter how creative your prompt gets, returns an error message.

404 Media reported that the same error occurs with the names “Jonathan Zittrain” and “Jonathan Turley,” two law professors.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🌎 AI for Good: Flood prediction 

  • ⚡️ Amazon wants its data centers to double as carbon capture machines

  • 👀 Intel CEO retires amid challenging revamp

  • 📄 Report: The mirage of ‘open’ AI

AI for Good: Flood prediction 

Source: MIT

Researchers at MIT have developed a tool that combines generative AI with older forecasting approaches to generate predictions of satellite imagery following potential flooding events. 

The details: The system pairs a physics-based flood model, which supplies real-world parameters, with a generative AI model to create realistic images of a given region after a storm (a rough sketch of this pairing follows the list below). 

  • The researchers found that incorporating the physics-based model was key to the system’s performance; a generative AI model alone hallucinated, i.e., it “generated images of flooding in places where flooding is not physically possible.”

  • In testing the completed system against models of Hurricane Harvey, the 2017 storm, the researchers found that it produced realistic and accurate satellite images. 
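MIT’s announcement doesn’t include code, so here is a toy sketch of the general conditioning idea, not the team’s actual system: a stand-in “physics model” computes where water can physically reach, and a stand-in “generative model” is only allowed to paint flooding inside that mask, the constraint that suppressed hallucinated flooding in the researchers’ tests. Every name and number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physics model": flood extent from elevation vs. storm-surge height.
# Real systems run hydrodynamic simulations; this threshold stands in for them.
def physics_flood_mask(elevation_m: np.ndarray, surge_m: float) -> np.ndarray:
    return elevation_m < surge_m  # True only where water can physically reach

# Toy "generative model": renders a flooded-looking image, but only where the
# physics mask allows it, so it cannot hallucinate flooding on high ground.
def generate_post_storm_image(base_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    flood_texture = rng.normal(loc=0.3, scale=0.05, size=base_image.shape)
    out = base_image.copy()
    out[mask] = flood_texture[mask]  # paint water only inside the mask
    return out

elevation = rng.uniform(0, 10, size=(64, 64))     # meters above sea level
satellite = rng.uniform(0.4, 0.9, size=(64, 64))  # grayscale "before" image
mask = physics_flood_mask(elevation, surge_m=3.0)
after = generate_post_storm_image(satellite, mask)
print(f"{mask.mean():.0%} of pixels flooded under a 3 m surge")
```

An unconstrained generator would skip the mask entirely, which is exactly the failure mode the researchers observed.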

Why it matters: “The idea is: one day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, said in a statement. “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

Put an End to Those Pesky Spam Calls

There are few things more frustrating than dashing across the room to answer your ringing phone, only to see "Potential Spam" on the caller ID (probably for the third time today).

If you want to cleanse your phone of this annoyance (and increase your personal security), you have three options:

  1. Throw your phone into the ocean

  2. Individually block each unknown caller

  3. Stop spammers from getting your number in the first place with Incogni

We highly recommend option 3, and not just because electronic garbage is bad for aquatic life.

Incogni’s automated personal information removal service hunts down your breached personal information, then removes it from the web. Plus, Incogni will reduce the number of spam emails in your inbox.

The Deep View readers can get 58% off an annual plan using code "DEEPVIEW" – Get started with Incogni right here.

Amazon wants its data centers to double as carbon capture machines

Source: Amazon

Amazon on Monday said it is entering into a multi-year partnership with Orbital Materials, part of the company’s effort to deploy more sustainable and more efficient technology in its data centers. 

Who is Orbital Materials? The New Jersey-based company launched in 2022 with a simple mission: combine generative artificial intelligence with domain expertise to “accelerate and redefine the discovery, design, development and deployment of advanced materials and climate technologies.”

  • The company, which recently clocked a “significant investment” from Nvidia’s venture capital arm, developed a generative AI system that it says dramatically reduces the lengthy trial and error phase of advanced material development. 

  • Here, Amazon Web Services will be working with Orbital to develop “new technologies and advanced materials for data center-integrated carbon removal, chip cooling and water utilization.”

Amazon said that Orbital will pilot its data center carbon removal tech next year. 

Orbital CEO Jonathan Godwin said that the company’s “partnership with AWS will accelerate the deployment of our advanced technologies for data center decarbonization and efficiency.”

The landscape: This comes, of course, amid a massive increase in data center energy demand driven by Big Tech’s collective push into energy-hungry AI applications. Demand has spiked to such an extent that Google, Amazon and Microsoft are all exploring nuclear energy as a means of meeting their data center energy requirements; sustainability, however, has largely been abandoned in the race for generative AI. 

Webinar: How to meet LLM deployment compliance standards

Learn how to:

  • Meet stringent Generative AI Governance and Compliance standards

  • Address hidden risks like bias and privacy vulnerabilities in real-time

  • Achieve today’s compliance standards while establishing a foundation for future regulatory changes and scale

Headlines:

  • US officials rolled out yet another round of China chip and tech restrictions, according to Semafor, blocking exports to 140 China-based companies.

  • The website theyseeyourphotos.com demonstrates just how much information Big Tech companies, in this case, Google, can glean from a single image, according to Wired.

  • How 2024’s tech trends changed our lives (Rest of World).

  • OpenAI explores advertising as it steps up revenue drive (FT).

  • The company behind Arc is building a new AI web browser called Dia (The Verge).

  • Employee lawsuit accuses Apple of spying on its workers (Semafor).

  • The beginning of the end of Big Tech (Wired).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Tools:

  • Pythia: An AI tool designed to assist founders.

  • Robofy AI: AI Chatbot builder for your website.

Intel CEO retires amid challenging revamp

Source: Intel

Intel CEO Pat Gelsinger has retired, the company said in a statement on Monday, elevating executives David Zinsner and Michelle Johnston Holthaus to co-CEO positions as the board of directors looks for a permanent replacement. 

The details: Gelsinger’s retirement comes at something of a trying time for the company; he took the helm at Intel in 2021, immediately undertaking an effort to revamp the company into a semiconductor leader in an age of AI and Nvidia dominance. 

  • The cost of this restructuring has been significant, however; Intel reported a $16.6 billion loss in the third quarter of this year, following a $1.6 billion loss in the second quarter. The firm, which announced in August a plan to cut 15% of its workforce (15,000 people), had delivered $300 million in profits a year earlier. 

  • Gelsinger at the time said that the results indicated that Intel was making “solid progress” on its revamp, though he added that there was a lot more work to be done. 

Shares of Intel, which have fallen more than 50% this year, spiked as much as 5% in pre-market trading, but closed the session down 0.5%.

The reasons behind Gelsinger’s retirement aren’t officially clear, though unnamed sources told Reuters that the board forced him out, giving him the choice to retire or be removed.

“While we have made significant progress in regaining manufacturing competitiveness and building the capabilities to be a world-class foundry, we know that we have much more work to do at the company and are committed to restoring investor confidence,” Frank Yeary, the chair of Intel’s board, said in a statement.

Report: The mirage of ‘open’ AI 

Source: Unsplash

A central debate within the AI community — and one that has made its way onto the global stage of regulatory attempts and efforts — involves so-called “open-source” versus closed models. 

Taking inspiration from the open-source software movement — which, emerging in the ‘90s, shaped the internet we see today — policymakers, companies and academics have debated whether that terminology and approach can be applied to artificial intelligence. 

The landscape: For starters, definitions of that first term — artificial intelligence — remain murky at best. Some researchers have stopped using it altogether, arguing that the term itself is scientifically inaccurate. 

  • When we think of AI today, what we’re thinking about is not some artificial intelligence, but rather a generative interface built on top of a large language model (LLM), a deep-learning algorithm trained on enormous quantities of data. 

  • The most popular generative AI models are closed models, simply meaning details remain on lockdown; we know very little about any of OpenAI’s models, especially its more recent ones. Training data, model weights and source code remain unknown. Other industry leaders, like Anthropic and Google, largely follow this same approach. 

But then you have the so-called “open” developers; Meta, for instance, has often talked about the importance of ‘open’ AI. And Meta isn’t wrong about the benefits: open AI enables researchers to study systems and improve upon them, increases transparency, aids regulators and makes it easier to externally verify that systems fulfill regulatory and safety requirements. 

Meta’s models, however, aren’t truly open, despite CEO Mark Zuckerberg’s regular assertions about the importance of open-source AI. As one recent paper found, Meta’s Llama models are “at best open weights, and closed in almost all other aspects.” Likewise, Mistral, another ‘open’ AI startup, has declined to “share details about the training and the datasets … due to the highly competitive nature of the field.”
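To make “open weights, closed everything else” concrete, here is a minimal sketch of what an open-weights release actually hands you, assuming the Hugging Face transformers library (the model ID is illustrative, and Meta gates its checkpoints behind a license agreement): the weights download and run, but nothing in the artifact discloses the training data.

```python
# Minimal sketch, assuming the Hugging Face `transformers` library is installed
# and license-gated access to the (illustrative) model ID has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # "open": approved users can download the weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The config describes architecture (layer count, hidden size, vocab size),
# not provenance; no field here tells you what data the model was trained on.
print(model.config)
```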

The Open Source Initiative recently published a definition of open-source AI: to be considered open, a model must have open parameters and source code, along with detailed information about its training data. 

At the same time, opponents of open-sourcing have argued that it’s dangerous to essentially enable anyone to rebuild models without the safeguards that come with popular closed systems (as this article from Vox dissects). 

A new paper from Signal President Meredith Whittaker, David Gray Widder and Sarah Myers West argues that this debate over the benefits and risks of allegedly open AI completely misses the point. 

The details: First, the researchers argue that the term ‘open-source’ shouldn’t be applied to AI at all; even with open training data, code and model weights, the black-box nature of LLMs is such that “there are inherent limitations in the ability to predict the behavior of systems that are probabilistic” (a toy illustration of this point follows the list below). The researchers instead use the term ‘open’ to describe AI models and systems. 

  • Despite the OSI’s recent definition, there remains a wide spectrum of AI openness, according to the paper, all lumped under the same blanket terms of ‘open’ or ‘open-source.’ 

  • The paper argues that even for models that check all the boxes of the OSI’s definition, that is to say, maximally open models, AI development remains hamstrung by resource dependency. 
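Here is the toy illustration of that first, probabilistic point: because generative models sample from a probability distribution, identical and fully ‘open’ code and weights can still produce different outputs on the same input. The three-token vocabulary and logits below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng()  # unseeded on purpose: runs will differ

# Toy next-token step: the "weights" (logits) and code are fully open,
# yet each run is a draw from a distribution, not a fixed answer.
vocab = ["flood", "drought", "storm"]
logits = np.array([2.0, 0.5, 1.0])             # fully inspectable parameters
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary

for run in range(3):
    token = rng.choice(vocab, p=probs)
    print(f"run {run}: {token}")               # may differ from run to run
```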

You need data centers filled with powerful GPUs to train and run generative models, and Big Tech — mostly through Nvidia and through Microsoft, Google and Amazon’s data center businesses — controls those resources. Once they have a model, developers need a pipeline for deployment, which, similarly, is mostly controlled through Big Tech’s cloud computing platforms. 

The researchers argue that even “open” systems remain closed due to the magnitude of Big Tech’s full-stack dominance, negating the potential positive impacts of such an approach, such as democratization and enhanced competition (a dynamic evidenced by the ever-growing number of startup-to-Big Tech partnerships). 

“For this reason, the pursuit of even the most open AI will not on its own lead to a more diverse, accountable or democratized ecosystem, even though it may have other benefits … The reality is, however open it is, when AI systems are deployed at scale across sensitive domains, they can have diffuse and profound effects that should not be determined by the small handful of for-profit companies who at present control the resources required to create and deploy these systems at scale, bringing them in front of the millions of customers who will be directly affected by them, particularly when these effects cannot be foreseen simply by examining system code, model weights and documentation.”

The researchers wrote that policy intervention shouldn’t focus on whether AI is open or closed; it should instead address the concentration of power across the AI ecosystem. Pinning our hopes on open AI in isolation, they said, could cause harm by “assuming that it will deliver benefits that it cannot offer in the context of concentrated corporate power.”

To this, I would only add that semantics matter deeply in the field of artificial intelligence. 

Definitions of AI systems, capabilities and development processes, presented without context and without full transparency, serve no one but the corporations that have become so obsessed with the urgent need to sell the public on their generative AI products. 

Systems have been increasingly misrepresented as “intelligent” in a bid to win users, even as scientific practices have wilted — this is perhaps best exemplified by the non-peer-reviewed release-by-blog trend that has permeated the industry. 

Words, evidence and context all matter. And the context of the AI ecosystem is one of market dominance and carbon emissions, open or closed.  

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The 2nd image tray side dimensions were off relative to the coffee mug.”

Selected Image 2 (Right):

  • “I thought the depth of focus in the other image looked off.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the Canadian media copyright lawsuit:

A third of you said it’s a slam dunk; a third of you said it has no chance.

The rest aren’t sure.

No chance:

  • “Copyright is the creation of an artificial market and has gone too far. I don't think it should be allowed to stand in the way of AI. AI is too important an innovation to us all to be impeded by copyright.”

What do you think: would localized flood prediction models inspire you to evacuate where you normally wouldn’t?
