⚙️ The Nobel Prize and the mainstreaming of AI’s X-Risk
Good morning. Geoff Hinton — known as the Godfather of AI — won the Nobel Prize in Physics yesterday. He’s also a major believer in the existential risk of the AI he helped build.
So, naturally, I dove into the X-Risk discourse. Got a longer-than-normal read for you today. There’s a lot of context.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
AI for Good: Safer space travel
Adobe’s latest attempt at artist protection
Study: Preparing for AI agents
The Nobel Prize and the mainstreaming of AI’s X-Risk
AI for Good: Safer space travel
Source: Stanford University
In light of the enormous complexities, costs and risks associated with space travel, researchers at Stanford are developing AI-based algorithms designed to compute precise trajectories for vehicles in space.
The details: A team of aerospace engineers at the university in March presented a system called ART — the Autonomous Rendezvous Transformer — which they said is the first step toward safer, more reliable space travel.
While this is not the first time AI and supercomputing have been applied to space flight, the ART system outperformed existing machine learning-based algorithms in Earth-based tests.
The system differs from other methods in that it runs on the spacecraft’s on-board computers, whereas the current architecture relies on data being sent back and forth between computers on the ground and the vehicle in space.
“For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle,” Simone D’Amico, an associate professor of aeronautics and astronautics, said in a statement. “AI is helping us manage the complexity and delivering the accuracy needed to ensure mission safety, in a computationally efficient way.”
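For a sense of what a transformer applied to trajectories can mean in practice, here is a minimal, hypothetical PyTorch sketch: a small encoder that maps a short history of relative spacecraft states to a handful of predicted waypoints. The TrajectoryTransformer class, the state layout and the dimensions are invented for illustration; this is not the actual ART architecture.

```python
# Hypothetical sketch, not the Stanford ART model: a tiny transformer that
# maps a history of relative spacecraft states (position + velocity) to a
# sequence of predicted future waypoints.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    def __init__(self, state_dim=6, d_model=64, n_heads=4, n_layers=2, horizon=10):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)            # lift raw states into model space
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, horizon * state_dim)   # predict the next `horizon` states
        self.horizon, self.state_dim = horizon, state_dim

    def forward(self, history):                               # history: (batch, seq_len, state_dim)
        h = self.encoder(self.embed(history))
        return self.head(h[:, -1]).view(-1, self.horizon, self.state_dim)

# Toy usage: 8 past states of [x, y, z, vx, vy, vz] -> 10 predicted waypoints.
model = TrajectoryTransformer()
past_states = torch.randn(1, 8, 6)
print(model(past_states).shape)  # torch.Size([1, 10, 6])
```

The appeal of something this small is the point made above: inference is cheap enough to plausibly run on an on-board computer rather than waiting on a round trip to the ground.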
Expert insights at GenAI Productionize 2.0
"I'm blown away by the high quality and value of this event." - Ricardo B.
"Great event - worth getting up at 4am in the morning for!" - Sandy A.
"What an amazing and insightful summit!" - Madhumita R.
"I loved the presentations and was truly captivated by the depth of experience and insight shared on these panels!" - Peter K.
"Great event! Looking forward to the next one." - Rozita A.
"Spectacular and very insightful summit! Very well done!" - Chad B.
"This has been such an amazing event." - Rob B.
Don't miss GenAI Productionize 2.0 – the premier conference for GenAI application development featuring AI experts from leading brands, startups, and research labs!
Adobe’s latest attempt at artist protection
Source: Adobe
The rise of generative AI has, in many respects, polluted our collective information ecosystems, calling into question the genuine reality of just about everything that exists — in any form — in the digital world.
This challenge, coupled with how vital the digital world is to the functioning of modern society, led Adobe to launch its Content Authenticity Initiative in 2019. The initiative’s Content Credentials essentially act as a digital content ‘nutrition label,’ making clear the sources and authenticity of online content.
What happened: Adobe on Tuesday unveiled a new web app in conjunction with the initiative that “offers a simple, free and easy way for creators to apply Content Credentials to their work, helping to protect content from unauthorized use and ensure creators receive attribution.”
The credentials applied through the app will let creators set generative AI training preferences, which will be honored by Adobe and by Spawning, an opt-out aggregator (see the conceptual sketch below).
A beta version of the app will become available in the early part of next year.
While it helps, it doesn’t address the root issue: the non-consensual scraping that has already gone into building today’s generative AI models.
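To make the ‘nutrition label’ idea concrete, here is a conceptual sketch of what such a credential might carry: a hash that binds the record to the exact file, an attribution field and a training preference. The field names and the make_credential helper are invented for illustration; they are not Adobe’s app, the Content Credentials format or the C2PA schema.

```python
# Conceptual sketch only: an illustrative provenance record for an image.
# Field names are made up for this example, not taken from any real spec.
import hashlib
import json

def make_credential(image_path: str, creator: str, allow_genai_training: bool) -> dict:
    with open(image_path, "rb") as f:                      # assumes the file exists locally
        digest = hashlib.sha256(f.read()).hexdigest()      # bind the record to the exact bytes
    return {
        "asset_sha256": digest,                            # tamper-evident link to the image
        "creator": creator,                                # attribution
        "genai_training_allowed": allow_genai_training,    # the opt-out preference
    }

credential = make_credential("photo.jpg", creator="Jane Doe", allow_genai_training=False)
print(json.dumps(credential, indent=2))
```

The design idea is simple: because the record includes a hash of the asset itself, any later edit to the image breaks the link, which is what makes the label useful as evidence of provenance rather than just a caption.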
GPU cloud platform Tensor Wave raised $43 million in SAFE funding.
GPU platform VESSL AI raised $12 million in Series A funding.
OpenAI leaders say Microsoft isn’t moving fast enough to supply its servers (The Information).
Hurricane Milton could cause as much as $175 billion in damage, according to early estimates (CNBC).
Google must crack open Android for third-party stores, rules Epic judge (The Verge).
Philippine chipmakers are embracing automation — and leaving low-skilled workers behind (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Senior Product Manager, AI and ML: NetApp, Cranberry Township, PA
Director, AI Products: SimSpace, Remote
Study: Preparing for AI agents
Source: Created with AI by The Deep View
Apropos of our story yesterday about Nvidia’s push deeper and deeper into the murky field of “agentic” AI, a coalition of researchers — including former OpenAI board member Helen Toner — recently published a report dealing explicitly with the topic: how we should be preparing for the rise of the AI agent.
The details: The report defines agentic AI as systems that “pursue more complex goals in more complex environments, exhibiting independent planning and adaptation to directly take actions in virtual or real-world environments.”
As with normal generative AI, the report acknowledged that the idea of AI agents presents both risk and opportunity.
The opportunity centers on an increase in productivity and efficiency, for both individuals and corporations. The risks are manifold: accidents that result from hallucinating models operating with limited human oversight; misuse by bad actors; data governance and privacy problems; murky allocation of responsibility when algorithmic harm occurs; impacts on labor; and skill fade that leaves people dependent and vulnerable. (That last one is the big one, in my mind.)
The report highlighted the importance of guardrails, both legal and technical, in mitigating these risks as AI agents become both more powerful and more ubiquitous. At the core of this are transparency, explainability, interruptibility and reversibility, qualities that will likely require legal action to become widespread.
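As a rough illustration of what such technical guardrails could look like, here is a hedged sketch of an action wrapper that logs everything an agent does, pauses for human approval before irreversible steps and keeps an undo hook where one exists. The Action and GuardedAgent types are invented for this example, not taken from the report.

```python
# Illustrative guardrail sketch (types invented for this example):
# - transparency: every decision is logged
# - interruptibility: irreversible actions wait for human approval
# - reversibility: actions that can be undone carry an undo hook
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    run: Callable
    undo: Optional[Callable] = None          # None means the action cannot be reversed

@dataclass
class GuardedAgent:
    log: List[str] = field(default_factory=list)

    def execute(self, action: Action) -> None:
        if action.undo is None:
            # Irreversible actions are interruptible: pause for a human decision.
            approval = input(f"Approve irreversible action '{action.name}'? [y/N] ")
            if approval.strip().lower() != "y":
                self.log.append(f"BLOCKED {action.name}")
                return
        action.run()
        self.log.append(f"RAN {action.name}")

    def rollback(self, action: Action) -> None:
        if action.undo is not None:
            action.undo()
            self.log.append(f"UNDID {action.name}")

# Toy usage: a reversible file write the agent can run and later undo.
agent = GuardedAgent()
write = Action(
    "write scratch file",
    run=lambda: Path("scratch.txt").write_text("draft"),
    undo=lambda: Path("scratch.txt").unlink(),
)
agent.execute(write)
agent.rollback(write)
print(agent.log)  # ['RAN write scratch file', 'UNDID write scratch file']
```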
The Nobel Prize and the mainstreaming of AI’s X-Risk
Source: Niklas Elmehed/The Royal Swedish Academy of Sciences
On Tuesday, the Nobel Prize in Physics was jointly awarded to Princeton University Professor John Hopfield and University of Toronto Professor Geoffrey Hinton, two computer scientists whose work helped lay the foundation for present-day generative artificial intelligence.
The two men will split the $1.03 million prize associated with the award.
Hopfield was recognized for his 1982 creation of an associative memory network, now known as the Hopfield network, which could reconstruct images from imperfect data. A few years later, Hinton used the Hopfield network as the foundation for a type of neural network he called the “Boltzmann machine,” which was able to perform pattern recognition, a core feature of today’s models.
Note: A neural network is a machine learning model designed to (loosely) mimic the structure of the human brain. Each network consists of layers of nodes, or artificial neurons.
These neural networks — heavily reliant on massive training data sets — are at the heart of the deep learning models that have become so popular today.
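For the curious, a Hopfield network is simple enough to fit in a few lines. The toy sketch below uses the standard textbook formulation with a made-up eight-unit pattern (not necessarily Hopfield’s exact 1982 construction): it stores one binary pattern as Hebbian weights, then recovers it from a corrupted copy, which is the “reconstruct from imperfect data” trick in miniature.

```python
# Minimal Hopfield network sketch (numpy only): store one binary pattern,
# then recover it from a corrupted copy.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])     # the "memory" to store
W = np.outer(pattern, pattern).astype(float)         # Hebbian weights
np.fill_diagonal(W, 0)                               # no self-connections

noisy = pattern.copy()
noisy[:2] *= -1                                      # flip two bits to corrupt the input

state = noisy.copy()
for _ in range(5):                                   # synchronous updates until stable
    state = np.sign(W @ state)
    state[state == 0] = 1

print("recovered original pattern:", np.array_equal(state, pattern))  # True
```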
The X-Risk of it all: Both scientists have expressed concern about the potentially damaging impact their inventions could have on society. Hopfield signed the Future of Life Institute’s 2023 “Pause AI” letter, which argued for a six-month moratorium on the release of powerful AI models.
And Hinton, of course, left Google last year to speak out about the dangers of AI, specifically raising the alarm over the potential for AI models to become smarter-than-human before breaking out of our control.
At a press conference Tuesday, Hinton said that AI will “be comparable with the Industrial Revolution,” adding: “We have no experience of what it’s like to have something smarter than us … We have to worry about a number of possible bad consequences, particularly, the threat of these things getting out of control.”
OK, let’s get into it. This idea of AI becoming take-over-the-world intelligent is, first and foremost, a scientifically dubious, hypothetical scenario. It has been a source of heated debate within the AI community, one that went more mainstream following the release of ChatGPT.
This idea is predicated on an assumption that human-like (or better) cognition will become computable; that achieving such cognition will lead to genuine sentience; and that such a combination will result in a new ‘creature’ or ‘species’ that can then act of its own, potentially violent, volition.
Hinton believes that present-day language models already possess an understanding of the world. But this is not grounded in science.
In fact, there is a rather lengthy list of known obstacles that plant this hypothetical abstraction firmly in the science-fiction camp. Before I get into them, however, it is important to note two things. The first is that the biggest concern among researchers pushing back against so-called existential risk is that its existential framing will overshadow the harms these models are already causing (algorithmic bias, environmental damage, worker exploitation, data privacy violations, fraud, deepfake harassment and so on), in turn shaping a damaging misunderstanding among the public and regulators alike.
The other thing to note is that current models, complete with unsolvable biases and hallucinations, could cause plenty of catastrophic damage by being allowed to operate in risky environments without human supervision. In other words, we don’t need superintelligence to cause harm.
Current AI refers in large part to large language models, which are probabilistic next-word prediction engines. These models can interpolate, meaning they can output predictions based on their training data, but they struggle mightily with extrapolation, the application of existing knowledge to never-before-encountered scenarios. This is a hard thing to study, since the companies behind these models won’t share details about their training data. But what the models cannot do is understand their output.
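To see what “probabilistic next-word prediction” looks like at cartoon scale, here is a toy bigram model over a made-up twelve-word corpus. It handles contexts it has seen and simply has no answer for ones it hasn’t. Real LLMs are vastly more capable interpolators, but the underlying objective is the same kind of next-token guess; the corpus and the predict helper here are invented for illustration.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most common successor for a given context word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                 # bigram counts

def predict(word):
    if word not in counts:
        return None                        # unseen context: no basis for a prediction
    return counts[word].most_common(1)[0][0]

print(predict("the"))        # 'cat' (interpolation over seen data; ties broken by first occurrence)
print(predict("astronaut"))  # None (nothing to extrapolate from)
```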
Recent neuroscience research indicates that language and thought — a key component of cognition in human intelligence — are not the same thing; researchers found that “your language system is basically silent when you do all sorts of thinking,” casting doubt on the idea that outputting language is, by itself, indicative of intellect.
Part of this involves massive unknowns. We understand little about human cognition, human intelligence, or human consciousness; we don’t know how they relate to each other or how the organic construction of the brain powers these seemingly unique human elements.
These knowledge gaps make the jump from next-word prediction algorithms to thinking, feeling, self-acting ‘creatures’ large and murky enough that some scientists don’t think an artificial general intelligence will ever be possible (or that, at best, it is a really, really long way off).
“Such language that inflates the capabilities of automated systems and anthropomorphizes them … deceives people into thinking that there is a sentient being behind the synthetic media,” a team of researchers including Timnit Gebru, Emily Bender and Margaret Mitchell said in response to the Pause AI letter. “This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.”
Meta’s chief scientist Yann LeCun — who shared the 2018 Turing Award with Hinton — has called the idea of existential risk “preposterously ridiculous.”
Signal President Meredith Whittaker told MIT Tech Review that “there’s no more evidence now than there was in 1950 that AI is going to pose these existential risks … I think we need to recognize that what is being described, given that it has no basis in evidence, is much closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.”
It is a view shared by Suresh Venkatasubramanian, a former White House tech advisor who told me last year that X-Risk is a “great degree of religious fervor masked as rational thinking … there’s no science in X-Risk.”
A team of cognitive computational scientists said in a recent paper that the eventual creation of human-level AGI is “intractable,” with one of the researchers adding that even in a perfect scenario, where cognition is computable, “there will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close.”
The simple fact of the matter is that if we properly governed the AI we have today, we would also be prepared for a scenario in which companies develop more powerful models. That means ensuring accountability, transparency and explainability; enshrining consumers’ rights to data privacy and freedom from surveillance; preventing worker and environmental exploitation; and keeping these models out of decision-making roles that lack human oversight and a clear, easy means of intervention.
The greatest danger lies in losing sight of the current environment — and the socio-political strings that move it — and the scientific realities of the algorithms that have been illusorily dubbed “AI.”
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
“The faux shallow depth of field on the other image is a giveaway for me personally, and this image appears to be taken in daylight with a closed-down aperture with realistic grain structure from using a higher ISO setting.”
Selected Image 2 (Right):
“Actually, quite a beautiful photograph. The coloring in picture #1 just appears more natural than the AI #2.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on linguistic diversity in LLMs:
Half of you think linguistic diversity is important in LLMs. 23% disagree, and the rest aren’t really sure.
Absolutely:
“With diverse societies being more common then not, why wouldn’t linguistic diversity be important?”
Something else:
“The larger the data set the better the answers, particularly in research models thereby requiring linguistic diversity. Merging any of these data will be the challenge.”
How do you feel about AI agents?