Biden divvies up the world in latest round of tech restrictions
Good morning. Monday was a big day on the regulatory front, with Biden finalizing a rule governing the global export of AI chips just one week before he’s set to leave office.
And OpenAI finalized its own AI policy recommendations at the same time. We get into all of it below.
(Submit your Real photos here).
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
⚕️AI for Good: Canines and cancer screenings
🚘 Mercedes eyes AI tech, partners with Google
🏛️ OpenAI wants a very light-touch regulatory approach
🌎 Biden divvies up the world in latest round of tech restrictions
AI for Good: Canines and cancer screenings
Source: SpotitEarly
Generative artificial intelligence is often talked about as a dual-use technology: on the one hand, it poses a number of risks running the gamut from cybersecurity to sustainability. On the other, it could transform healthcare.
And while AI is certainly not curing cancer any more than it is solving climate change, generative tools are being leveraged in a number of ways to help doctors out.
What happened: A recent study, published in Nature’s Scientific Reports, seemingly validated a new approach to cancer screening that combines AI with dogs for early detection.
The method, developed by SpotitEarly, relies on three components: the existence of specific molecular profiles of cancer in breath samples; the ability of trained dogs to detect those profiles; and AI to analyze the results of the dogs’ sniff tests.
In a trial that included roughly 1,400 participants, the dog-assisted screening process successfully identified four different types of cancer 94% of the time.
The idea is to scale an accessible, accurate screening method, enabling more people to easily gain access to early-detection tools, something that can be life-saving when it comes to cancer treatment. SpotitEarly is currently pursuing regulatory approval in order to bring its test to market.
The world of AI is evolving at lightning speed, and the only way to stay relevant is to MASTER AI before it masters you.
Join the World’s first 2-Day Mastermind Challenge to learn the Tools, Tactics, and Strategies to Automate Your Work Like Never Before!
Best part? It usually costs $395, but the first 100 of you get in for FREE! YES!
Inside the AI Mastermind, you will learn:
👉 Foundations of Generative AI, Neural Networks, LLMs & master advanced prompting
👉 How to use diffusion-based AI models and their real-world applications
👉 Image and Video creation & create a Jaguar ad from scratch solely using AI tools
👉 Building CustomGPTs & AI Agentic flows
👉 Setting up extensive automations for your daily tasks using Make
By the way, you will not just learn, but also implement and build things in break-out rooms with fellow attendees! 🚀
Mercedes eyes AI tech, partners with Google
Source: Mercedes-Benz
Mercedes is bringing Google’s “Automotive AI Agent” to its vehicles in the form of a virtual, AI-powered assistant. The first car with the new assistant will be the Mercedes CLA.
The details: As an expansion of the partnership between the two companies, Mercedes said Monday that it will leverage Google Cloud’s AI tech to enhance its own MBUX virtual assistant, in an effort to provide natural-language answers in the flow of conversation with users.
The idea is pitched as an evolution of Google Maps: drivers can ask the AI-powered virtual assistant to, for instance, “guide me to the nearest fine dining restaurant for a unique culinary experience.”
Google said in a statement that users can also ask follow-up questions, such as what reviews the restaurant has received or what the chef’s signature dish is. The assistant on offer here was built using Google’s Gemini AI.
The assistant will retain memory of these conversations, allowing them to be picked up later.
But, neither company made any mention of the issues of hallucination and bias that are known to plague generative AI systems, so it’s unclear what mitigation procedures may or may not be in place. It additionally remains unclear how user data will be collected, stored, safeguarded or used — either sold to data brokers or for training purposes — as well as if the assistant will collect audio of the user (and any non-consenting passengers) even when not in use.
Google didn’t respond to a request for comment regarding the above points.
I sat down with Dr. Nada Sanders, author of The Humachine, to break down the pandemic-fueled rise of AI, and a future of increasing man x machine interaction and integration. Subscribe and check it out here.
British Prime Minister plans to 'unleash AI' across UK to boost growth (BBC).
There’s a popular tech stock washout Monday as Palantir, Nvidia and Rigetti Computing drop (CNBC).
Supreme Court allows Hawaii climate change lawsuit to move forward (NBC News).
Nvidia’s top customers face delays from glitchy chip racks (The Information).
Global fact-checkers were disappointed, not surprised, Meta ended its program (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
OpenAI wants a very light-touch regulatory approach
Source: OpenAI
OpenAI doesn’t want a lot. Just strategically light federal regulation, government data, national security information and resources, voluntary oversight and nuclear reactors.
The “economic opportunity AI presents is too compelling to forfeit,” after all.
What happened: The startup on Monday published its AI-related policy proposals, coming a week before President-elect Donald Trump is set to take office. Trump is likely to usher in an era of hands-off corporate regulation.
The brief, 15-page document attempts to thread a needle that OpenAI has been threading for nearly two years now: AI is scary, but it could also usher in a fantastical utopia, so long as we get the regulation right (and by right, it means regulation that doesn’t “stifle” innovation).
The core of the document is an attempt to reframe the role of regulation: cars, OpenAI says, were invented in Europe, but due to European over-regulation, America became the heart of the world’s auto industry. To OpenAI, AI is pretty much the same as those cars.
OpenAI wants the freedom for users and developers to design and deploy AI tech “as they see fit, in exchange for following clear, common-sense standards that help keep AI safe for everyone, and being held accountable when they don’t.” Those common-sense standards, however, seem to serve a bit of an ulterior motive, specifically “by preempting a state-by-state tangle of roads and rules.”
As part of this proposal, OpenAI wants the U.S. to “share national security-related information and resources that it alone maintains” with AI developers, incentivize the broad deployment of AI technologies, help companies access secure infrastructure and create a “voluntary” pathway for companies to work with the government on model evaluations and oversight.
Users, OpenAI said, should be free to use AI tools however they want. And “in exchange for having so much freedom, users should be responsible for impacts of how they work and create with AI,” something that would seem to insulate developers from liability. OpenAI also proposed the creation of more AI infrastructure, including nuclear reactors, and the digitization of analog government data for the purposes of training models.
This is in line with my own predictions from the other week; that tech companies will lobby for a very, very light slate of advantageous federal regulations to get around the patchwork state-by-state approach.
This comes on the same day that the White House unveiled new AI chip restrictions.
Biden divvies up the world in latest round of tech restrictions
Source: Created with AI by The Deep View
“Artificial intelligence is quickly becoming central to both security and economic strength,” President Joe Biden’s administration said Monday, in a preamble to its sweeping new rules regarding the global diffusion of the chip hardware required to run generative AI.
“The United States must act decisively to lead this transition by ensuring that U.S. technology undergirds global AI use and that adversaries cannot easily abuse advanced AI,” the administration wrote, adding that in the wrong hands, AI systems can “exacerbate significant national security risks.”
Biden’s latest dive into global chip control follows a years-long tech export restriction battle with China, which recently culminated in both countries banning the export of several key semiconductor materials. It also comes just days before Donald Trump is inaugurated as the 47th President of the U.S. — as such, the rules will take effect 120 days from their publication, according to Reuters, giving the next administration time to weigh in.
What are we dealing with, here: The idea at the core of this Interim Final Rule on Artificial Intelligence Diffusion is one of balance: hedging against national security concerns while striving to ensure American dominance in tech and AI.
According to the rule, it is critical “that the world’s AI runs on American rails.”
The rule divides the world into three tiers. Eighteen “Tier One” countries — key allies including Britain and Japan — face no restrictions on the number of AI-enabling GPU chips they can purchase, or on the size of the data centers they’d like to build. Companies headquartered in Tier One countries can gain authorization to build data centers holding up to 7% of their total compute in Tier Two countries, contingent on meeting security and trust standards.
Companies located in Tier Two countries — encompassing roughly 120 countries including Israel and the United Arab Emirates — will be capped at 50,000 GPUs over the next two years. However, if companies in those countries meet high security standards, they can apply to become a National Verified End User, which will boost their purchase cap to 320,000 GPUs over the next two years.
A number of caveats apply here. For instance, governments that are part of the U.S.’s “international ecosystem of shared values” regarding the deployment of AI can double their chip caps. Additionally, any order of fewer than 1,700 chips will not require a license and will not count against national chip caps, on the grounds that such orders serve “clearly innocuous purposes.”
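The tiered caps described above can be summarized in a toy sketch. This is purely illustrative, using only the figures reported here (no cap for Tier One, 50,000 GPUs for Tier Two, 320,000 for National Verified End Users, a doubling for “shared values” governments, a 1,700-chip license exemption, and a bar on Tier Three); the actual rule is far more nuanced, and whether the doubling stacks on top of the verified-end-user uplift is an assumption.

```python
# Toy model of the reported chip-cap tiers. All names and the
# stacking behavior of the "shared values" doubling are assumptions.

SMALL_ORDER_EXEMPTION = 1_700  # orders below this need no license


def effective_cap(tier: int, verified_end_user: bool = False,
                  shared_values: bool = False) -> float:
    """Two-year GPU purchase cap implied by the reported figures."""
    if tier == 1:
        return float("inf")  # key allies: no cap
    if tier == 3:
        return 0             # "countries of concern": barred
    cap = 320_000 if verified_end_user else 50_000
    if shared_values:
        cap *= 2             # aligned governments can double their caps
    return cap


def order_needs_license(chips: int) -> bool:
    """Orders under 1,700 chips are license-free and uncounted."""
    return chips >= SMALL_ORDER_EXEMPTION
```

Under these assumptions, a Tier Two company starts at a 50,000-GPU cap, rising to 320,000 with verified status, while small orders slip under the licensing regime entirely.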
But Tier Three countries — such as Russia, China and other “countries of concern” — remain barred from receiving chips.
The rule additionally restricts the transfer of model weights (parameters) to non-trusted actors and establishes security standards to protect advanced models.
The reaction: In a statement, Nvidia — conspicuously, the world’s dominant chipmaker — called the rules a “sweeping overreach” that will “squander America’s hard-won technological advantage.”
Lennart Heim, of the Rand Corporation, said that the key principle of the framework is simple: “build your AI infrastructure in the U.S. and partnered nations.” He added that “critics are right, the balance here is crucial.” If the controls are too restrictive, they might give an advantage to the only real alternative: Chinese chips, the competitiveness of which Heim called “debatable.”
Janet Egan, a senior fellow at the Center for a New American Security, said the framework represents “serious unilateral action from the U.S. to shape the global AI ecosystem to align with its strategic interests.” She added that, though the rule will raise concerns from the global community, it would also “streamline processes that were likely already happening case-by-case behind the scenes.”
It’s very much unclear how Trump will approach this; looking only at the U.S. vs China element of the rule, it seems relatively in line with Trump’s expected approach to China (tariffs of up to 60%, for instance).
The markets bled a bit on Monday, with Nvidia falling by roughly 2.5%, TSMC by more than 3% and Microsoft by more than 1%.
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
“At first image 1 seemed fake with the single huge white-capped mountain (Mt. Fuji?) but a closer inspection of image 2 revealed that the shadow was wrong, especially around the biker's head and torso.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on Meta’s copyright lawsuit:
A third of you think the latest unredacted documents will swing the case against Meta; 15% think they won’t. The rest aren’t sure.
What do you think about OpenAI's proposals?