⚙️ Gary Marcus on fighting for a positive AI future

Good Morning. Welcome to this first weekend longread edition of The Deep View.

I’ve been meaning to do this for a while.

Dr. Gary Marcus’ new book on artificial intelligence — “Taming Silicon Valley” — is less about science and more about power, and it doesn’t pull its punches. 

In the opening pages, Marcus, a cognitive scientist and AI researcher, gets into the meat of the ethical problems posed by AI and the companies that are assembling the tech. If nothing is done, he says, if Silicon Valley is not tamed, “the imbalances of power, where unelected tech leaders control an enormous swath of our lives, will grow. Fair elections may become a thing of the past. Automated disinformation may destroy what remains of democracy.”

“In the worst case,” he writes, “unreliable and unsafe AI could lead to mass catastrophes, ranging from chaos in electrical grids to accidental war or fleets of robots run amok. Many could lose jobs. Generative AI’s business models ignore copyright law, democracy, consumer safety, and impact on climate change. And because it has spread so fast, with so little oversight, Generative AI has, in effect, become a vast, uncontrolled experiment on our whole population.”

The central pillar of the book is Marcus’ roadmap to achieving AI that works for people rather than against them. That road, he writes, is paved with data and privacy rights, transparency requirements, liability and duties of care for developers, AI literacy for the masses, independent oversight (including agile, international governance that incentivizes the creation and deployment of “good AI”) and a more pointed exploration of new paradigms that could yield AI we can actually trust. 

On display is the at-odds relationship between the science of AI and the business of the industry, which has been built on layers of unfounded hype. 

Current AI — meaning generative systems like ChatGPT — trails a long list of ethical concerns: cybercriminality, sustainability, copyright infringement, enhanced surveillance, the mass spread of misinformation, bias and fairness, a lack of explainability, job loss and algorithmic discrimination. 

But the architecture poses an additional risk: overreliance and misunderstanding. Because chatbots generate language-based output, they give the impression of intelligence, of understanding. As far as the science is concerned, however, such models are likely nothing more than probabilistic statistical generators operating at enormous scale, a reality that makes the very term ‘AI’ misleading. 

Adding to this are the enormous and little-understood complexities of the human mind, cognition and consciousness.

Despite all the ethical and technical concerns surrounding the development and deployment of AI, the companies behind the models remain opaque and unregulated, a reality that opens up the opportunity for widespread harm (which has been occurring for years). 

The challenge we’re faced with, according to Marcus, is one we must overcome, since the potential benefits of AI are, he says, significant. Achieving those benefits requires smart regulation and smart science, and above all, an active citizenry, one that fights for its rights rather than allowing Big Tech and Silicon Valley to steamroll over everyone in pursuit of ever-larger market valuations. 

I met up with Marcus to talk about the book and the many complexities of AI (you can watch the whole thing below). 

The case for regulation 

The recurring talking point that Big Tech players have pushed in response to regulatory efforts is that regulation would “stifle” innovation.

Marcus called the talking point a “lie.” 

“Sometimes regulation actually fosters innovation,” he said, referring to the introduction of seatbelts in cars, as well as the aviation and pharmaceutical industries. “There's a certain part of Silicon Valley that wants you to believe that lie, that regulation and innovation are in diametric opposition. But it's not true. It's not like because we have an FDA, all big pharma just drops out and gives up. It's ludicrous.”

According to Marcus, what we need right now is greater innovation around safe and responsible AI. Regulation, he said, can help achieve that kind of innovation. Suresh Venkatasubramanian, an AI researcher and former White House tech advisor, has similarly said that regulation will trigger “market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service. The laws and regulations will create a demand for this kind of work.”

And as this debate continues — wrapped up in massive lobbying efforts — AI-related legislation is moving slowly in the U.S., with federal policy largely nonexistent and state regulatory efforts both varied and mismatched. 

Even with some early legislation in Europe, we remain far away from a unified governance approach, and even further from the kind of international governance agency that Marcus has so often pushed for. 

The original mission

Artificial intelligence, as a field, Marcus said, has largely lost its way. 

In the beginning, it was research-oriented, and that research was intended to help people. 

It’s a point that Igor Jablokov — an AI researcher who helped develop early precursors to Amazon’s Alexa and IBM’s Watson — similarly expressed to me in an interview in June. He said that early researchers were drawn to AI by a desire to make strides in human safety technology (such as self-driving vehicles), accessibility for disabled people and culturally diverse language translation. 

“It wasn't a profit motive,” Jablokov said. “There were good, altruistic reasons why many of us were attracted. And remember, there was no money. It wasn't like people were throwing $10 million a year at us back then.” 

Now, the industry is attracting billions of dollars from megacap tech corporations and venture capitalists alike. Efforts at AI development and deployment have rewarded the stocks of publicly traded tech companies with hundreds of billions of dollars in value. OpenAI — in talks to raise funding at a $150 billion valuation — charges $20 per month for a premium subscription to ChatGPT. The Information has reported that the company has achieved more than $3 billion in annualized revenue. 

And the sector, focused on LLMs and generative AI chatbots, is trying very hard to generate the kind of returns that would justify the scope of its massive investment in the tech. 

Marcus called this transformation of the industry a story of “money and corruption and a story … of how AI went from good technical leaders who really understood the science to people that are better at marketing and hype and so forth.”

“And they have been rewarded for that marketing and hype,” he added. “The wrong things are being rewarded.” 

Part of that hype is baked into the very language of many generative AI products. 

OpenAI’s newest model — OpenAI o1 — for instance, is described, in both marketing material and within chats themselves, as being designed to “spend more time thinking” before it “responds.” This kind of language anthropomorphizes such models, hammering home misplaced human attributes that could give people the wrong impression about what, exactly, AI is and is capable of. 

A screenshot of o1 “thinking” (OpenAI).

Where do we go from here? 

With legislation moving slowly in the face of potentially severe risks and active harms, Marcus said that part of the reason he wrote the book is to inspire collective civic action. 

“I would like to see people get loud about this, talk to their congresspeople, write op-eds, maybe consider boycotts,” he said. “We should say, ‘look, as consumers … we want AI to be done in an ethical way, in a way that is good for everybody and not just a few people. We're going to stand by that.’”

Such action, at a large enough scale, could help change the culture of Silicon Valley, according to Marcus. 

“Silicon Valley used to care more about the consumers, and nowadays has an attitude of like, ‘yeah, we're just gonna make money off those people,’” Marcus said. 

“We need to change that attitude.”