⚙️ OpenAI's bargaining chip

Good morning. Penguin Random House is taking a stronger stance to protect its books from being turned into AI training fodder.

According to a report from The Bookseller, the publishing house is changing its copyright wording to read: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.”

How about that.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

MBZUAI Research: LLMs and fake news detection

Source: Created with AI by The Deep View

The plague of misleading or otherwise fake news far predates Large Language Models (LLMs) and generative AI. But the advent of generative AI makes detecting instances of fake news a little bit more complicated. 

Past research has framed the problem around automation; generally, automated news was treated as fake news. Generative AI creates a more complex environment, in which humans pen both genuine and fake news, and humans also use machines to generate both.

The focus, according to researchers at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), should be on determining authenticity rather than on whether a given article was human- or machine-generated.

The details: The researchers studied the efficacy of AI-based fake news detectors trained on varying ratios of human-written and machine-generated articles.

  • They found that, when trained only on human-written articles, fake news detectors can detect machine-generated fake news. But detectors trained only on machine-generated articles cannot detect human-written fake news. 

  • The researchers said that, to train a fake news detector with the highest efficacy, the training set should contain a lower proportion of machine-generated articles than human-written ones.

Why it matters: While good at detecting machine-generated fake news, these detectors struggle to detect human-written fake news articles, underscoring an inherent bias in (and the weak reliability of) such systems.
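
For intuition, here's a minimal sketch of the kind of training-ratio experiment described above. This is not MBZUAI's actual code: a scikit-learn TF-IDF plus logistic-regression pipeline stands in for whatever detector architecture the researchers used, and the article lists (`human_train`, `machine_train`, `human_test`) are assumed to come from your own labeled corpus.

```python
# A minimal sketch of the training-ratio experiment (not MBZUAI's code).
# Assumes human_train, machine_train and human_test are lists of
# (article_text, is_fake) pairs from a labeled corpus of your own.
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline


def build_training_set(human_articles, machine_articles, machine_ratio):
    """Mix the two sources so machine-generated text makes up roughly
    `machine_ratio` of the training pool (0.0 = human-only; keep it < 1.0)."""
    n_machine = int(len(human_articles) * machine_ratio / (1.0 - machine_ratio))
    n_machine = min(n_machine, len(machine_articles))
    mixed = list(human_articles) + random.sample(machine_articles, n_machine)
    random.shuffle(mixed)
    texts, labels = zip(*mixed)
    return list(texts), list(labels)


def run_experiment(human_train, machine_train, human_test, machine_ratio):
    texts, labels = build_training_set(human_train, machine_train, machine_ratio)
    # TF-IDF + logistic regression stands in for the paper's detector.
    detector = make_pipeline(TfidfVectorizer(max_features=50_000),
                             LogisticRegression(max_iter=1000))
    detector.fit(texts, labels)
    # Evaluate on human-written articles only; the finding above suggests
    # this score drops as machine_ratio climbs toward 1.0.
    test_texts, test_labels = zip(*human_test)
    return f1_score(list(test_labels), detector.predict(list(test_texts)))
```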

To learn more about MBZUAI’s research, visit their website; if you’re interested in graduate study, see their study webpage.

  • Former OpenAI technology chief Mira Murati to raise capital for new AI startup, sources say (Reuters).

  • How Google is changing to compete with ChatGPT (The Verge).

  • Elon Musk offers $1 million a day to entice swing state voters to sign petition (CNBC).

  • Inside a Chinese battery giant’s plan to become ‘more European’ (Semafor).

  • Employees describe an environment of panic and fear inside WordPress chaos (404 Media).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Tesla’s FSD is under investigation again

Source: Unsplash

Last week, federal regulators opened yet another investigation into Tesla’s Full Self-Driving (FSD) technology. This marks the 14th investigation by the National Highway Traffic Safety Administration (NHTSA) into Tesla’s vehicles, according to Ars Technica, and is one of several ongoing investigations.

This specific investigation was prompted by a series of four crashes, one of which killed a pedestrian in 2023, that occurred while the drivers had FSD engaged.

  • Each of the accidents involved areas of suddenly reduced visibility, resulting from “sun glare, fog or airborne dust.”

  • This preliminary evaluation of around 2.4 million FSD-equipped Teslas aims to assess the ability of “FSD’s engineering controls to detect and respond appropriately to reduced roadway visibility conditions.”

It also aims to probe whether other FSD crashes have occurred in similar reduced-visibility conditions, and whether Tesla has shipped any updates to the system that could affect its performance in such conditions.

The context: Robotaxis like Waymo’s come equipped with a long list of sensors, including lidar and radar in addition to cameras. Tesla’s FSD, by contrast, is powered exclusively by computer vision, a significant limitation on its performance since there is no redundant sensing layer (see the sketch below).

  • As such, the term “FSD” is a bit of a misnomer; engaging the system requires the hands-on, eyes-on attention of the driver. 

  • This investigation comes shortly after Elon Musk’s robotaxi event, during which Musk said that fully autonomous Teslas would hit the streets of California and Texas next year, despite lacking regulatory approval and despite the technical hurdles tied to the vehicles.

This investigation also feels particularly interesting in the context of a study we reported on several months ago, which found that self-driving cars are far more likely than humans to get into accidents at dawn, at dusk and during turns. Neural networks have severe limitations; those limitations are on display here.
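
To make the redundancy point concrete, here's a toy sketch. It is not Tesla's or Waymo's actual software, and the confidence scores and thresholds are invented; it simply shows that a system fusing multiple modalities can fall back on lidar or radar when glare blinds the camera, while a vision-only system cannot.

```python
# A toy illustration of redundant sensing (not any automaker's real code).
from dataclasses import dataclass


@dataclass
class SensorReading:
    detects_obstacle: bool
    confidence: float  # 0.0 (blinded) to 1.0 (clear conditions)


def fused_decision(camera: SensorReading,
                   lidar: SensorReading,
                   radar: SensorReading) -> bool:
    """Trust whichever modalities are currently reliable."""
    reliable = [s for s in (camera, lidar, radar) if s.confidence >= 0.5]
    return any(s.detects_obstacle for s in reliable)


def vision_only_decision(camera: SensorReading) -> bool:
    """No redundancy: if the camera is blinded, there is no fallback."""
    return camera.detects_obstacle and camera.confidence >= 0.5


# Sun glare washes out the camera; lidar and radar are unaffected.
glared_camera = SensorReading(detects_obstacle=False, confidence=0.1)
lidar = SensorReading(detects_obstacle=True, confidence=0.9)
radar = SensorReading(detects_obstacle=True, confidence=0.9)

print(fused_decision(glared_camera, lidar, radar))  # True: obstacle still seen
print(vision_only_decision(glared_camera))          # False: obstacle missed
```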

OpenAI’s bargaining chip 

Source: Created with AI by The Deep View

A couple of relatively significant stories broke late last week concerning the — seemingly tenuous — partnership between OpenAI and Microsoft. 

The background: OpenAI first turned to Microsoft back in 2019, after the startup lost access to Elon Musk’s billions. Microsoft, which has now sunk more than $13 billion into the ChatGPT maker, has developed a partnership in which it provides the compute (and the money) and OpenAI gives it access to its generative technology. OpenAI’s tech, for instance, powers Microsoft’s Copilot.

According to the New York Times, OpenAI CEO Sam Altman last year asked Microsoft for more cash. But Microsoft, concerned about the highly publicized boardroom drama that was rocking the startup, declined. 

  • OpenAI recently raised $6.6 billion at a $157 billion valuation. The firm expects to lose around $5 billion this year, and it expects its expenses to skyrocket over the next few years before it finally turns a profit in 2029.

  • According to the Times, tensions have been steadily mounting between the two companies over issues of compute and tech-sharing; at the same time, OpenAI, focused on securing more computing power and reducing its enormous expense sheet, has been working for the past year to renegotiate the terms of its partnership with the tech giant. 

Microsoft, meanwhile, has been expanding its portfolio of AI startups, recently bringing the bulk of the Inflection team on board in a $650 million deal. 

Now, the terms of OpenAI’s latest funding round were somewhat unusual. The investment was predicated on an assurance that OpenAI would transition into a fully for-profit corporation. If the company has not done so within two years, investors can ask for their money back. 

According to the Wall Street Journal, an element of the ongoing negotiation between OpenAI and Microsoft has to do with this restructuring, specifically, how Microsoft’s $14 billion investment will transfer into equity in the soon-to-be for-profit company. 

  • According to the Journal, both firms have hired investment banks to help advise them on the negotiations; Microsoft is working with Morgan Stanley and OpenAI is working with Goldman Sachs. 

  • Amid a number of wrinkles (the fact that OpenAI’s non-profit board will still hold equity in the new corporation; the fact that Altman will be granted equity; the risks of antitrust scrutiny, depending on the amount of equity Microsoft receives), there is another main factor that the two parties are trying to figure out: what governance rights each company will have once the dust settles.

Here’s where things get really interesting: OpenAI isn’t a normal company. Its mission is to build a hypothetical artificial general intelligence (AGI), a theoretical technology that pointedly lacks any universal definition. The general idea is that it would possess, at least, human-adjacent cognitive capabilities; some researchers don’t think it will ever be possible.

There’s a clause in OpenAI’s contract with Microsoft stating that if OpenAI achieves AGI, Microsoft gets cut off. OpenAI’s “board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

To quote from the Times: “the clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations.”

This is a good example of why the context behind definitions matters so much when discussing anything in this field. There is a definitional problem throughout AI. Many researchers dislike the term “AI” itself; it’s a misnomer, since we don’t have an actual artificial intelligence.

The term “intelligence” is itself vague and open to the interpretation of the developer in question.

And the term “AGI” is as formless as it gets. Unlike physics, for example, where gravity is a known, hard, agreed-upon concept, AGI is theoretical, hypothetical science; further, it is a theory bounded by resource limitations and by massive gaps in our understanding of human cognition, sentience, consciousness and intelligence, and how these all fit together physically.

This doesn’t erase the fact that the labs are trying hard to get there. 

But what this environment could allow for is a misplaced, contextually unstable definition of AGI that OpenAI pens either as a ticket out from under Microsoft’s thumb or as a means of negotiating the contract of Sam Altman’s dreams.

In other words, OpenAI saying it has achieved AGI doesn’t mean that it has.

As Thomas G. Dietterich, Distinguished Professor Emeritus at Oregon State University, said: “I always suspected that the road to achieve AGI was through redefining it.”

Which image is real?


🤔 Your thought process:

Selected Image 2 (Left):

  • “This one was a little easier than others have been. The girl in Image 1 looked ‘synthetic’; a few of the apples appeared to be ‘cut off’; and the shadowing in Image 2 was incredibly prominent (I haven't seen an AI pic able to create that ... yet).”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on humanoid robots:

You guys are pretty excited by the idea of humanoid robots; around 40% of you said you would definitely buy one, and 18% said you wouldn’t spend more than $5,000 on one.

Only 34% in total wouldn’t buy one, with some saying they would have no use for one.

No:

  • “I don’t have a practical use for one at this stage of my life, but that doesn’t mean that one wouldn’t be useful for the right situation.”

Something else:

  • “As a full-time business owner and mom, it would be great to have an assistant to help me do more and not have to do it ALL. From simple tasks like assisting me when cooking to assembling boxes for client gifts, the one thing I don't think I could get over is a human-sized machine in my home. We got used to animals living in our house without worrying they'll attack us, but a robot machine with uber intelligence that can (walk) on its own kind of freaks me out.”

How do you deal with the possibility of fake news?
