⚙️ Paris AI Summit: The ‘Smoke and mirrors’ of governance

Good morning. There has been a debate on Twitter over whether using em-dashes (—) is a sign of AI-generated ‘writing’ …

The em-dash is my favorite form of punctuation, followed closely by the semicolon. Been using ‘em since a middle school English teacher pointed out that I did, in fact, have the option.

Slander.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🌎 AI for Good: Vulnerability mapping

  • 👁️‍🗨️ OpenAI debuts new product roadmap, teases GPT-5 

  • 🏛️ Paris AI Summit: The ‘Smoke and mirrors’ of AI governance

AI for Good: Vulnerability mapping

Source: WFP

The core of the United Nations World Food Programme (WFP) revolves around targeting; its mission of fighting hunger around the world necessarily requires the allocation of finite resources. 

Several years ago, the WFP launched something called GeoTar, a geospatial targeting tool that examines such factors as climate change, agricultural capacity and service utilization to identify those areas most in need of humanitarian aid. 

What happened: The WFP said this week that it has been selected to participate in IBM’s Sustainability Accelerator, a pro-bono program that will provide the WFP with tech, services and funding. 

  • The partnership will be focused on the GeoTar tool, with IBM aiming to enhance it with more advanced data and AI capabilities, features that the WFP said will improve its ability to shore up food security around the world.

  • The partnership will last for the next two years, during which the pair will workshop and implement the best technological approach. 

GeoTar is currently implemented in Afghanistan, Chad and Bangladesh, though the WFP hopes to expand the tool to other regions soon. 
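To make the idea of geospatial targeting concrete, here is a minimal sketch of a composite vulnerability index, the general technique such tools rely on. The districts, indicators and weights below are invented for illustration; this is not WFP’s actual GeoTar methodology.

```python
# Purely illustrative sketch of geospatial targeting, not WFP's actual
# GeoTar methodology: combine normalized indicator layers into one
# vulnerability score per district, then rank districts for aid.

districts = {
    # name:       (climate_stress, crop_capacity, service_access), all 0-1
    "District A": (0.9, 0.2, 0.3),
    "District B": (0.4, 0.7, 0.8),
    "District C": (0.7, 0.3, 0.5),
}

# Hypothetical weights: stress raises need; capacity and access reduce it.
WEIGHTS = (0.5, -0.25, -0.25)

def vulnerability(indicators):
    return sum(w * x for w, x in zip(WEIGHTS, indicators))

ranked = sorted(districts, key=lambda d: vulnerability(districts[d]), reverse=True)
print("Priority order:", ranked)  # most vulnerable district first
```

In a real deployment, the indicator layers would come from satellite and survey data, and the weights would be calibrated by domain experts rather than hard-coded.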

GAIN 1 HOUR EVERY DAY WITH AN AI ASSISTANT

Fyxer AI organizes your inbox, drafts extraordinary emails in your tone of voice, and writes better-than-human meeting notes.

Designed for teams using Gmail, Outlook, Slack, Teams, Google Meet, or Zoom.

OpenAI debuts new product roadmap, teases GPT-5 

Source: OpenAI

The news: OpenAI CEO Sam Altman wrote in a post Wednesday that “GPT-4.5” is on the way.

The details: This model, according to Altman, will be the final non-chain-of-thought model OpenAI intends to release. It is the same model that was internally referred to as Orion, which was reportedly giving OpenAI some trouble in November.

After that release, Altman said, the paramount goal for OpenAI is to unify its “o-series” of chain-of-thought ‘reasoning’ models with its GPT family of large language models (LLMs) by shipping “systems” that can tap into all of OpenAI’s options.

  • Altman said OpenAI will release the long-awaited GPT-5 as a system “that integrates a lot of our technology, including o3.” OpenAI will no longer ship o3 as a standalone model.  

  • GPT-5 will be available (and unlimited) to free ChatGPT users at a “standard intelligence setting,” while Plus and Pro users will be able to use a higher intelligence setting, whatever that means. 

The timeline for the release of GPT-4.5 and GPT-5, according to Altman, is weeks and months, respectively. The training data, energy intensity and associated carbon emissions of the products in play are, as always, unclear. 
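What would a “system” rather than a model look like? Here is a purely illustrative sketch of the routing idea, assuming nothing about OpenAI’s internals; every function and heuristic in it is hypothetical.

```python
# Purely illustrative sketch of a "system" that routes between models.
# Nothing here reflects OpenAI's actual design; every name is hypothetical.

def looks_hard(prompt: str) -> bool:
    """Toy heuristic: send math/code/multi-step prompts to the reasoner."""
    keywords = ("prove", "step by step", "debug", "optimize", "why")
    return any(k in prompt.lower() for k in keywords)

def call_fast_model(prompt: str) -> str:
    # Stand-in for a GPT-style general model (fast, cheap).
    return f"[fast answer to {prompt!r}]"

def call_reasoning_model(prompt: str) -> str:
    # Stand-in for an o-series chain-of-thought model (slow, expensive).
    return f"[deliberate answer to {prompt!r}]"

def unified_system(prompt: str, tier: str = "standard") -> str:
    """One entry point, many backends; a paid tier could unlock more reasoning."""
    if tier == "pro" or looks_hard(prompt):
        return call_reasoning_model(prompt)
    return call_fast_model(prompt)

print(unified_system("What is the capital of France?"))             # fast path
print(unified_system("Prove the sum of two odd numbers is even."))  # reasoner
```

Under this reading, a “standard intelligence setting” is just a routing policy, which is why critics say the GPT-5 branding tells us little about underlying capability.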

The landscape: This comes at a time when, on the heels of the DeepSeek tremor, Western developers have suddenly been tasked with reassuring their investors that they will remain dominant. It also comes shortly after governments around the world — the U.S., the EU and France, specifically — promised to plow hundreds of billions into AI infrastructure.

Altman is very good at focusing people’s attention. GPT-5 has been on the horizon for so long that I am thoroughly unsurprised he’s bringing it back into action now, like a magician reaching around in his hat for a rabbit. To me, this roadmap indicates just how much DeepSeek shook the walls of OpenAI. In his vague mentions of GPT-5, Altman is not describing a massive new breakthrough; just the integration of OpenAI’s existing tech, which remains susceptible to hallucination and algorithmic bias.

As Google DeepMind researcher Juliette Pluto wrote: “given that GPT-5 isn’t a model, this means nothing.” 

Today’s Fastest Growing Company Might Surprise You

🚨 No, it's not the publicly traded tech giant you might expect… Meet $MODE, the disruptor turning phones into potential income generators. Investors are buzzing about the company's pre-IPO offering.1

📲 Mode saw 32,481% revenue growth from 2019 to 2022, ranking them the #1 overall software company on Deloitte’s most recent fastest-growing companies list2 by aiming to pioneer "Privatized Universal Basic Income" powered by technology — not government. Their flagship product, EarnPhone, has already helped consumers earn & save $325M+.

🫴 Mode’s Pre-IPO offering1 is live at $0.26/share, and 20,000+ shareholders already participated in its previous sold-out offering. They’ve just been granted the stock ticker $MODE by the Nasdaq1, and you can still invest in their pre-IPO offering at just $0.26/share before it closes.

  • A recent WSJ poll of CIOs found that, though 61% are experimenting with AI agents, 21% aren’t using them at all. A key concern is reliability, something that’s not really surprising given the risk of hallucination in critical business areas.

  • The EU has withdrawn draft proposals that, among other things, would have instituted AI liability rules, a move that notably comes shortly after the bloc’s push for AI investment.

  • Google will use machine learning to estimate a user’s age (The Verge).

  • Pakistan’s electric rickshaws are accelerating the country’s EV revolution (Rest of World).

  • California’s problem now isn’t fire — it’s rain (Wired).

  • Neuralink competitor Paradromics secures investment from Saudi Arabia’s Neom (CNBC).

  • Adobe Firefly unveils first video generation model that it says is "safe to use" (TechRadar).

The Paris AI Summit: The ‘Smoke and mirrors’ of governance

Source: Unsplash

Nearly two years ago, I watched cognitive scientist Dr. Gary Marcus and OpenAI chief Sam Altman stand before the U.S. Senate, briefly united in testimony regarding the rising specter of artificial intelligence. One of the few points the two men agreed upon was that policy regarding AI safety must be an international, “intergovernmental,” cooperative effort, similar to CERN, the European Organization for Nuclear Research. 

In the months since, models have become both more powerful (as measured by benchmark tests) and more ubiquitous; artificial general intelligence (AGI), meanwhile, if the tech executives are to be believed, could arrive as soon as next year. 

Despite the fact that AGI remains a scientifically dubious, hypothetical technology, and despite the flaws and limitations of current systems, the world’s leading AI labs are openly striving to achieve systems that match or exceed human capabilities, an effort that has proceeded without governance or oversight. 

If you thought that the Paris AI Summit would inject a modicum of clear oversight into this environment, you thought wrong (though I’ll forgive you for your optimism). The main takeaway from this third global AI summit is simple: the ideals of international governance have been sacrificed to the all-consuming fires of the AI Race.

Here’s what happened: Where the 2023 Bletchley Summit and the 2024 Seoul Summit focused extensively on the myriad risks posed by AI, the Paris Summit was little more than an “opportunity for French nationalism and claims to primacy as much as it was one for the United States,” according to Tim Hwang, a research fellow at Georgetown’s Center for Security and Emerging Technology. 

  • French President Emmanuel Macron — who announced more than $110 billion in private AI investments this week — said during the summit that France is back in the AI Race. It was a theme echoed by European Union President Ursula von der Leyen, who announced more than $200 billion in AI investments across the EU alongside a firm pronouncement that people shouldn’t count the EU out of the AI Race, yet. 

  • U.S. Vice President JD Vance, meanwhile, made quite clear during a speech at the summit that “the AI future will not be won by hand-wringing about safety.” Vance said that the Trump Administration is supremely disinterested in restricting the development of the tech. Neil Chilson, Head of AI Policy at the Abundance Institute, called it “one of the most pro-innovation speeches I've ever heard from an elected politician.” Vance is, among other things, a former tech venture capitalist.  

The outcome: The vague statement that resulted from the two-day conference focuses first on “making innovation in AI thrive” and boosting the accessibility of the tech. There is a brief mention of developing open, ethical, transparent systems, a brief mention of the unsustainable energy requirements associated with AI and no mention of any of the risks or threats associated with the technology. 

“A lot of what's being discussed at the moment lacks substance, definitely, and I think the reason is the whole thing is running way ahead of policymakers,” Dr. Seena Rejal, CCO of the AI startup NetMind, told me. “They can't really keep up with it because I don't think they really understand it at its core. So that is really quite dangerous.”

  • The statement was signed by 60 countries, including China, France, Japan and the United Arab Emirates. Notable absentees on that list of signatories included the U.K. — which said it lacked vital clarity on global governance — and the U.S. 

  • Computer scientist Yoshua Bengio and Anthropic CEO Dario Amodei — each of whom has said, despite a lack of clear evidence, that AGI is coming very soon — called the conference a missed opportunity to actually address any real risks.  

AI expert Dr. Andriy Burkov, meanwhile, panned the summit as being “as ridiculous as a Paris Teleportation Summit or a Paris Time Travel Summit … at this point, everything you have is unsubstantiated speculations about what AI can be, based on LLM demos as applied by crooked CEOs (and their henchmen aka crooked influencers) to cherry-picked examples from the training data.”

Rejal explained that there’s a “lot of smoke and mirrors” at play here, in which companies are holding up the “holy grail of AGI” as a means of securing enormous quantities of investment while ensuring a deregulatory environment. “There’s definitely a sleight of hand going on, and people are playing into it.” 

“You're in a situation where this is so technically advanced and yet so massively disruptive that a small percentage of people really understand what it does and what it can do, and everyone else is sort of caught up in the fog of this,” Rejal added. “And either they're really scared or they're really excited, but they don't know why.”

He doesn’t think governance is dead, though, just that it needs to massively mature. 

“And I think it will, it will, in due course, (but) it will take disasters … before we take things seriously and actually commit to something internationally that is more than just a show of dominance by one country or one bloc visiting another.” 

I remain skeptical of AGI and of the existential risk associated with it for a number of reasons, beginning with a lack of definitions around what constitutes either AGI or the human intelligence it aims to replicate, and running through resource limitations, the complexity of true cognition and the difference between the flexibility of human intellect and the brute-force mimicry of large language models.

However, companies explicitly aiming to develop something of this scale ought not be allowed to do so without oversight; and, far more importantly, there are threats associated with current AI that must be governed, rather than overlooked (over-reliance, algorithmic bias, energy intensity and associated carbon emissions, etc.). The impact of the broad adoption of this flawed and misunderstood technology has already been significant, while the impact of these global summits has been nonexistent. 

Clearly, there is little will for international oversight. There is just the Race. Which means we, as consumers, must be vigilant, and we must put our hopes in individual countries and states to enshrine some policy that protects human rights in the midst of this proliferation.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The seagulls in Image 2 were all wrong, especially their sizes.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on your ability to detect deepfakes:

40% of you can detect deepfakes some of the time; 27% can most of the time, but less and less lately; 15% of you say you can’t be fooled by AI; and 1% say the AI always fools you.

Sometimes:

  • “I'd like to think I was better at this, but the daily ‘test’ on images makes me face reality — deepfakes are really good and getting better every day.”

Most of the time:

  • “Even studious investigations to catch all the old tricks (number of fingers, inconsistent lines or backgrounds, reflections) have been dwindling over time. The ‘AI or Not?’ quiz has tricked me a few times, and I learned a new AI capability each time.”

Do you think OpenAI will ever actually release GPT-5?


If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Mode Mobile Disclaimers

1 Mode Mobile recently received their ticker reservation with Nasdaq ($MODE), indicating an intent to IPO in the next 24 months. An intent to IPO is no guarantee that an actual IPO will occur.

2 The rankings are based on submitted applications and public company database research, with winners selected based on their fiscal-year revenue growth percentage over a three-year period.

3 A minimum investment of $1,950 is required to receive bonus shares. 100% bonus shares are offered on investments of $9,950+.