⚙️ Canadian media companies sue OpenAI

Good morning. Elon Musk can’t seem to get OpenAI out of his head. The world’s richest man — in an escalation of his legal conflicts with Microsoft and OpenAI — on Friday asked a federal court to prevent OpenAI’s pending transition to a fully for-profit enterprise.

Musk, of course, was a founding member (and funder) of OpenAI; his departure precipitated the company’s relationship with Microsoft.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • ⚕️ AI for Good: Cancer gene tracking 

  • 💻 Amazon is almost ready to unveil a new generative video model 

  • 👁️‍🗨️ Australian inquiry accuses US Big Tech execs of being ‘pirates’

  • 🏛️ Canadian media coalition sues OpenAI

AI for Good: Cancer gene tracking 

Source: Stanford

Increasingly, doctors have been conducting genetic profile tests on cancer tumors; these tests offer a more precise diagnosis, which can then guide more specific treatment options, since certain drugs and therapies will work better (or worse) depending on the genetic profile of each individual tumor.

But that process involves genomic sequencing, which isn’t cheap. So, researchers have been turning to artificial intelligence-powered tools, which tend to excel at pattern-matching, for help.

What happened: Stanford Medicine researchers recently developed an algorithm, trained on more than 7,000 diverse tumor samples, that can predict the genetic activity of a tumor based on a simple microscope image of a biopsy.

  • The program, nicknamed SEQUOIA, displays its findings as a visual map, making it easy for clinicians to track genetic changes and variations. 

  • The team tested the model on breast cancer tumors, finding that SEQUOIA is capable of providing the same genetic risk score as MammaPrint, an FDA-approved genetic diagnostic test. 

“This kind of software could be used to quickly identify gene signatures in patients’ tumors, speeding up clinical decision-making and saving the health care system thousands of dollars,” Dr. Olivier Gevaert, the senior author of the paper, said in a statement.
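For intuition, here’s a minimal, purely hypothetical sketch of the pipeline such a tool implies: a biopsy image goes in, per-gene activity estimates come out, and a risk score is computed from a weighted gene signature. Every function, weight and number below is a made-up stand-in, not anything from the Stanford paper.

```python
# Hypothetical sketch of an image-to-gene-activity pipeline (NOT the actual
# SEQUOIA model). A real system would use a trained deep network; this uses
# random weights purely to show the shape of the problem.
import numpy as np

rng = np.random.default_rng(0)

def predict_gene_activity(image: np.ndarray, n_genes: int = 20_000) -> np.ndarray:
    """Stand-in model: microscope image -> predicted activity per gene."""
    features = image.mean(axis=(0, 1))            # toy feature extraction
    weights = rng.normal(size=(features.size, n_genes))
    return features @ weights                     # (n_genes,) activity estimates

def risk_score(expression: np.ndarray, signature: dict[int, float]) -> float:
    """Weighted sum over a gene signature -- MammaPrint-like in spirit only."""
    return sum(w * expression[g] for g, w in signature.items())

biopsy = rng.random((224, 224, 3))                # placeholder biopsy image
activity = predict_gene_activity(biopsy)
print(risk_score(activity, {12: 0.8, 404: -0.3, 7_001: 1.1}))
```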

How MindStudio is Helping Teams Work Smarter with AI

MindStudio’s AI platform empowers everyone—developers and non-technical teams alike—to build smarter, faster with AI. Whether it’s customer support agents or fully autonomous workflows, AI Workers can be the game-changing upgrade your business needs.

Trusted by Fortune 500 companies, governments, and indie developers, MindStudio has over 100,000 users already building the future. Sign up today for free and get $5 to try any AI model!

Amazon is almost ready to unveil a new generative video model

Source: Amazon

Amazon has developed a new Large Language Model (LLM) that, importantly, is capable of processing images and videos in addition to text, according to a recent report from The Information.

The model is code-named Olympus. Amazon did not return a request for comment (made outside of normal working hours) regarding the model.

The details: Three unnamed sources told The Information that the system could, for instance, help customers “search video archives for certain scenes … using simple text prompts.” 

  • Amazon may, according to the report, announce the model sometime this week, during its annual AWS re:Invent conference. 

  • Sources told The Information that the model is less advanced than offerings from OpenAI and Anthropic, though it represents a big improvement over Amazon’s first batch of LLMs, bundled under the umbrella code-name Titan and released last year. 

The context: This comes shortly after Amazon said it would double its investment in Anthropic, to $8 billion. The new model could serve as an opportunity for Amazon to lessen its reliance on startups like Anthropic.
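As for what “searching video archives with simple text prompts” usually means under the hood, here’s a toy sketch of the common embedding-based approach: scenes and queries are mapped into a shared vector space and ranked by similarity. This is a generic illustration, not Amazon’s API; the encoder below is a deterministic stand-in for a real multimodal model.

```python
# Toy sketch of text-prompt video search via a shared embedding space.
# NOT Amazon's API; a real system would embed frames and text with a
# trained multimodal encoder rather than the hash-seeded stand-in here.
import hashlib
import numpy as np

def embed(item: str, dim: int = 128) -> np.ndarray:
    """Stand-in encoder: maps any string (scene label or query) to a unit vector."""
    seed = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

# A pretend archive: scene identifiers mapped to their embeddings.
archive = {f"video_{i}/scene_{j}": embed(f"video_{i}/scene_{j}")
           for i in range(3) for j in range(4)}

def search(prompt: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Rank archived scenes by cosine similarity to the text prompt."""
    q = embed(prompt)
    scored = [(name, float(q @ vec)) for name, vec in archive.items()]
    return sorted(scored, key=lambda s: -s[1])[:top_k]

print(search("a dog catching a frisbee"))
```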

🗓️ The Code AI Summit is on 12/12—join us to see how AI meets real-world codebases

Unlock the future of coding: join us on 12/12 for the first-ever Virtual Code AI Summit.

Hear from engineering leaders, AI practitioners and development teams about the real-world progress being made with AI in complex enterprise codebases. Join for fantastic sessions, live Q&A and the chance to connect with hundreds of other dev leaders.

Attend live on December 12 → Register today

  • Elon Musk files for injunction to halt OpenAI’s transition to a for-profit (TechCrunch).

  • US to introduce new restrictions on China’s cutting-edge chips (Wired).

  • Startup founder who sold AI chatbot to schools charged with fraud (New York Times).

  • In the new space race, hackers are hitching a ride into orbit (CNBC).

  • Taiwan leader stops in US following weapons sale announcement, drawing China’s ire (Semafor).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

  • The FTC has launched a broad antitrust investigation into Microsoft, Bloomberg reported, examining everything from its cloud business to its AI products. Wedbush analyst Dan Ives isn’t concerned or surprised, calling it a “final shot across the bow,” with FTC chair Lina Khan not expected to remain at the agency into the next administration.

  • Elon Musk followed through on his promise to give X (formerly Twitter) stakeholders shares of xAI, according to the FT. X shareholders were given a 25% stake across xAI’s two funding rounds.

Australian inquiry accuses US Big Tech execs of being ‘pirates’

Source: Unsplash

An Australian senate committee recently released an extensive report, the result of nine months of hearings, that explored the pitfalls and promises of artificial intelligence. The report, which noted the unwillingness of U.S. Big Tech giants to provide fully transparent responses — specifically surrounding their use of user data — produced a series of 13 recommendations for Australian legislators that would, if pursued, establish a sweeping new legal framework for AI in the country. 

The details: The report recommends the introduction of a dedicated piece of AI legislation, similar to the European Union’s AI Act, that would first designate certain AI applications as “high risk,” and then regulate those applications through a rights-based, transparent approach.

The report specifically calls for the inclusion of popular Large Language Models (LLMs) in the proposed “high-risk” regulatory category. 

  • The report also calls for funding to support environmentally sustainable AI infrastructure in the country, as well as the consideration of “an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.”

  • To that end — and of particular note — the report specifically calls out the “unprecedented theft of (creators’) work by multinational tech companies,” something the Australian Society of Authors was quite pleased with. 

According to the report, a “refusal to directly answer questions was an ongoing theme in the responses received from Amazon, Meta and Google.” 

“Watching Amazon, Meta and Google dodge questions during the hearings was like sitting through a cheap magic trick ... plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” the committee’s chair, Labor Senator Tony Sheldon, said in a statement. “These tech giants aren’t pioneers; they’re pirates — pillaging our culture, data and creativity for their gain while leaving Australians empty-handed.”

The context: It’s been two years since the release of ChatGPT set off a global race to, first, get people on board with generative AI, and second, to (maybe?) think about regulating it. To that end, the European Union’s risk-based approach, as well as China’s regulatory framework, represent some of the only pieces of AI legislation anywhere in the world.

The U.S., at the federal level, remains slow to the point of nonexistence when it comes to regulation, and with the incoming presidential administration, a deregulatory environment for tech and AI is now widely expected.

Canadian media coalition sues OpenAI

Source: Created with AI by The Deep View

A coalition of five of Canada’s largest news media organizations, which collectively own and operate dozens of news websites, has filed a massive copyright infringement lawsuit against OpenAI, claiming several violations of Canada’s Copyright Act, as well as unjust enrichment. 

OpenAI did not respond to a request for comment (made after normal working hours). 

Canadian copyright law: Before we get into it, the venue here marks a significant difference from other, U.S.-based copyright infringement lawsuits. While OpenAI has argued that the training of its commercialized generative AI products on scraped copyrighted works constitutes “fair use,” a doctrine of U.S. copyright law (and an argument that many scholars have said shouldn’t hold up), Canada doesn’t have “fair use.”

  • The Canadian equivalent — “fair dealing” — allows for the use of copyrighted works in eight specific situations: research, private study, education, parody, satire, criticism, review and news reporting. 

  • A further seven factors must then be weighed to determine whether a given use is fair, including the purpose of the use and its impact.

Beyond that fair dealing section, the law itself — which you can read in full here — makes very clear that it is copyright infringement for anyone to sell, rent or distribute a copy of a work without the knowledge and consent of its author. 

Canadian copyright law also includes a far more expansive set of moral rights than U.S. copyright law, including the right of association; an author has the right to prevent the use of their work "in association with a product, service, cause or institution."

The details of the case: In addition to straightforward copyright infringement, the lawsuit accuses OpenAI of breaching the terms of use of the organizations’ websites, circumventing measures the organizations had put in place to prevent scraping, and unjust enrichment.

  • The media organizations are seeking several forms of damages, in addition to a permanent injunction preventing OpenAI from using their copyrighted works.

  • The lawsuit proposes statutory damages of $20,000 per infringed work; according to the suit, OpenAI, across all five organizations, infringed on a minimum of 15.1 million works. That comes to roughly $300 billion, about double OpenAI’s $157 billion valuation and nearly 90 times its $3.4 billion in annual revenue (a quick check of that math follows below).
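A quick sanity check of those figures, using only the numbers reported in the suit and cited above:

```python
# Back-of-the-envelope check on the proposed damages.
works = 15_100_000      # minimum infringed works claimed in the suit
per_work = 20_000       # proposed statutory damages per work
total = works * per_work

print(f"total: ${total / 1e9:.0f}B")                      # ~ $302B
print(f"vs. $157B valuation: {total / 157e9:.1f}x")       # ~ 1.9x
print(f"vs. $3.4B annual revenue: {total / 3.4e9:.0f}x")  # ~ 89x
```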

The crux of the case is laid out right in the opening pages of the suit: “to obtain the significant quantities of text data needed to develop their GPT models, OpenAI deliberately “scrapes” (i.e., accesses and copies) content from the News Media Companies’ websites … it then uses that proprietary content to develop its GPT models, without consent or authorization.”

Unlike a few of the recently dismissed copyright infringement cases we’ve talked about, this case gets right to the heart of the generative AI copyright question: is it legal for tech companies to, without permission, knowledge or compensation, scrape the bulk of the internet and use that content to build highly commercialized products?

The lawsuit argues that, under existing law, the practice is blatantly illegal; it adds that OpenAI’s ever-growing list of media licensing deals (which came well after the company’s initial round of copyright-infringing conduct) is evidence that OpenAI knows the practice is illegal, yet does it anyway: “at all times, OpenAI was and is well aware of its obligations to obtain a valid license to use the Works. It has already entered into licensing agreements with several content creators, including other news media organizations.”

This case feels significant to me, and somewhat similar in scope and potential impact to the New York Times suit and the Authors Guild suit against OpenAI. 

All three deal with the fundamental question of generative AI and copyright infringement; this case raises that question in an entirely different venue, calling into question the legality, and the ongoing operations, of these companies in a country outside of the U.S.

I would add that it feels significant that this case comes some two years after the launch of ChatGPT. It was a calculated move; it was not rushed. 

There is a technology here, in the form of genAI, that, in the words of Ed Newton-Rex, is “exploitative at its core.” I by no means have any predictions about the outcome of this or any other lawsuit — but the sheer cost of building and operating AI tech, combined with the scope of the copyright infringement, could put OpenAI and its peers in a very vulnerable position. 

A finding of liability, and a requirement to pay even some of the proposed damages in any of these lawsuits, could very well cripple a company that has been reliant on regular infusions of Big Tech and venture capital, to the tune of billions of dollars, just to remain a going concern.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The flour in the fake was scattered in a way I have never seen IRL.”

Selected Image 2 (Right):

  • “The flour powdering looked more realistic on image 2 ...but it fooled me.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on Thanksgiving desserts:

A third of you are into pumpkin pie; the rest are pretty evenly split between pecan, apple, fruit, cobbler and everything else.

And I just gotta say, I made a mean pumpkin swirl cheesecake this year. The secret is just a little bit of goat cheese and a metric ton of cinnamon.

What do you think of the Canadian media lawsuit?
