
⚙️ Anthropic offers policy suggestions to US government

Good morning, and happy Friday. The stock market had a tremendously bad day Thursday, with Tesla (TSLA) and Nvidia (NVDA) each falling nearly 6%.

The Nasdaq fell 2.6% and the S&P 500 fell 1.7%.

But it’s almost the weekend, so that’s good.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🧠 AI for Good: Artificial touch

  • 👁️‍🗨️ Alibaba takes a leaf from DeepSeek’s book in new release

  • 🏛️ Anthropic offers policy suggestions to US government

AI for Good: Artificial touch 

Source: Johns Hopkins

Researchers at Johns Hopkins University have developed a prosthetic hand that, loaded up with sensors and sensing algorithms, is able to automatically adjust its grip force to grasp different kinds of objects. 

The details: The hand is made up of multiple, rubber-like polymer “fingers” built around a rigid, 3D-printed skeleton. Forearm muscles control the finger joints, which feature three layers of tactile sensors designed to mimic human skin. 

  • The data from those sensors is processed by machine learning algorithms that translate that sensor information “into the language of nerves … to create a realistic sense of touch,” according to Johns Hopkins. 

  • In lab experiments, the hand was able to handle a variety of objects, ranging from stuffed animals to metal water bottles and a plastic cup filled with water, adjusting its grip to do so without any mishaps. 

Turning the bio-inspired robotic hand into an advanced prosthetic that restores some sense of touch to an amputee would require three main components: external sensors, a system to translate that sensor information into nerve signals and a means of stimulating the person’s nerves so they can feel what those sensors pick up.

“If you’re holding a cup of coffee, how do you know you’re about to drop it? Your palm and fingertips send signals to your brain that the cup is slipping,” Nitish Thakor, a biomedical engineering professor, said. “Our system is neurally inspired — it models the hand’s touch receptors to produce nervelike messages so the prosthetics’ ‘brain,’ or its computer, understands if something is hot or cold, soft or hard, or slipping from the grip.”
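For readers who like to see the idea in code, here is a minimal, purely illustrative sketch of that sense-and-adjust loop. The sensor layout, the stand-in for the learned model and every threshold are assumptions for illustration, not the Johns Hopkins implementation.

```python
# A toy, closed-loop grip controller for a sensorized prosthetic hand.
# Everything here (sensor layout, the stand-in for the learned model,
# thresholds and step sizes) is an illustrative assumption.

def read_tactile_layers() -> list[float]:
    """Placeholder: normalized pressure readings from three skin-like sensor layers."""
    return [0.42, 0.38, 0.05]

def nerve_like_slip_signal(readings: list[float]) -> float:
    """Stand-in for the learned model that turns raw sensor data into a
    nerve-like signal; higher values mean the object is more likely slipping."""
    contact = sum(readings) / len(readings)
    return max(0.0, 0.5 - contact)  # toy heuristic, not a trained model

def adjust_grip(force: float, slip: float) -> float:
    """Tighten the grip in small steps whenever the slip signal crosses a threshold."""
    if slip > 0.2:                       # assumed slip threshold
        return min(force + 0.05, 1.0)    # assumed step size and force ceiling
    return force

force = 0.5
for _ in range(20):                      # control loop running at some fixed rate
    force = adjust_grip(force, nerve_like_slip_signal(read_tactile_layers()))
```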

Let Fyxer Handle Emails & Meetings—You Focus on What Matters

Meet Fyxer, your AI Executive Assistant, saving you one hour daily. Start your day with organised emails, replies crafted in your tone, and clear meeting notes.

  • Email Organization: Fyxer prioritises important emails and filters spam.

  • Automated Email Drafting: Fyxer drafts replies so accurate, 63% are sent without edits.

  • Meeting Notes: Fyxer summarizes video calls with clear next steps.

Perfect for teams, Fyxer adapts to communication styles, boosting productivity.

Setup takes 30 seconds, with no credit card needed for a 7-day free trial.

Alibaba takes a leaf from DeepSeek’s book in new release

Source: Alibaba

A few months ago, ‘reasoning’ models were rare. Now they’re the industry’s latest trend, an approach that’s been replicated by OpenAI, DeepSeek, xAI, Anthropic, IBM and, most recently, Alibaba.

The news: The Chinese firm on Thursday launched QwQ-32B, a smaller ‘reasoning’ model resulting from Alibaba’s exploration of the reinforcement learning techniques that made DeepSeek’s R1 such a powerhouse. 

  • According to Alibaba, the new model performs at parity (based on benchmarks) with DeepSeek’s R1, a “remarkable” outcome considering that the model has only 32 billion parameters to R1’s 671 billion.

  • Alibaba released the model and its weights on Hugging Face, making it slightly more open than models from the likes of OpenAI and Anthropic, though far short of achieving legitimate open-source status, which would require the release of training data and source code. 

What’s going on here is relatively simple; several years ago, researchers discovered that something called Chain-of-Thought (CoT) prompting leads models to respond to queries in more logical steps, resulting in more robust output. And OpenAI more recently discovered that you can get models to perform CoT reasoning through reinforcement learning, in which models develop the desired behavior through trial and error, reinforced by rewards.
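To make that concrete, here is a toy illustration of the prompting side of the idea; the question, the prompt wording and the `ask_model` stand-in are assumptions, not any particular vendor’s API.

```python
# A toy illustration of Chain-of-Thought (CoT) prompting: the only difference
# between the two requests is the instruction to reason step by step.
# `ask_model` is a hypothetical stand-in for whatever LLM client you use.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model call")

question = "A train leaves at 2:15 pm and arrives at 5:40 pm. How long is the trip?"

plain_prompt = f"{question}\nAnswer:"

cot_prompt = (
    f"{question}\n"
    "Think through the problem step by step, showing each intermediate "
    "calculation, then give the final answer on its own line."
)

# In practice, the CoT prompt tends to surface intermediate steps
# ("2:15 to 5:15 is 3 hours, plus 25 minutes..."), which is where the
# reliability gains come from; RL training aims to elicit that same
# behavior without needing the instruction in the prompt.
```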

Rather than use traditional rewards, Alibaba said that it “utilized an accuracy verifier for math problems to ensure the correctness of final solutions and a code execution server to assess whether the generated codes successfully pass predefined test cases.”
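Here is a bare-bones sketch of what verifier-based rewards of that kind might look like; the reward scale, the exact-match check and the subprocess test runner are illustrative assumptions, not Alibaba’s published training setup.

```python
# Toy reward functions in the spirit of verifier-based RL for reasoning models:
# a math answer is rewarded only if it matches a known solution, and generated
# code is rewarded only if it passes predefined test cases. Everything here
# (reward scale, helpers, exact-match checking) is an illustrative assumption.
import subprocess
import sys

def math_reward(model_answer: str, reference_answer: str) -> float:
    """1.0 if the final answer matches the verified solution, else 0.0."""
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

def code_reward(generated_code: str, test_snippet: str) -> float:
    """Run the generated code plus its tests in a subprocess; reward on success."""
    program = generated_code + "\n" + test_snippet
    result = subprocess.run(
        [sys.executable, "-c", program],
        capture_output=True,
        timeout=10,
    )
    return 1.0 if result.returncode == 0 else 0.0

# Example usage with a trivial problem and test case:
print(math_reward("42", "42"))                           # 1.0
print(code_reward("def add(a, b):\n    return a + b",
                  "assert add(2, 3) == 5"))              # 1.0
```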

After that first stage, Alibaba added another reinforcement stage that leveraged more traditional rewards, resulting in steady increases in benchmark performance. 

Alibaba noted the “immense potential of scaled RL” to realize the “untapped possibilities within pre-trained language models.”

The work has not been independently verified. 

Hong Kong-listed shares of the firm hit a new 52-week high on Thursday, surging around 8% while its New York-listed shares slipped by about 1%. 

  • The Turing Award: Professors Andrew G. Barto and Richard S. Sutton were jointly awarded the 2024 Turing Award for their pioneering work in the development of reinforcement learning, work that dates back to the ‘80s. For the uninitiated, the Turing Award is like the Nobel Prize, but for computing.

  • A working paper: A recent paper from the Organization for Economic Cooperation and Development (OECD) examines the rise of “algorithmic management” in workplaces around the world. The paper highlights problems with trustworthiness and unclear accountability related to these algorithms.

  • As Bangladesh’s factories turn to surveillance and automation, garment workers feel the pressure (Rest of World).

  • Global temperatures are surging - but is Dubai actually getting cooler? (The National).

  • S&P 500 hits lowest since early November on trade policy fatigue (CNBC).

  • Spanish government raises €67 million for Multiverse Computing for AI compression (EU Startups).

  • Trump walks back tariffs on a wide range of goods from Mexico and Canada for one month (NBC News).

Innovation moves fast—don’t get left behind. AI Devvvs helps you stay ahead by matching you with specialists in Artificial Intelligence and Machine Learning.

Anthropic offers policy suggestions to US government

Source: Anthropic

President Donald Trump’s approach to artificial intelligence has, thus far, focused more on taking things away than it has on introducing new regulatory regimes. 

Upon taking office, he rescinded former President Joe Biden’s executive order on AI and issued his own on the same topic, an order with two main directives: one, that federal agencies immediately cease all actions taken in support of Biden’s order, and two, that an AI action plan be developed.

Running through this hazy environment have been ever-increasing staffing cuts across government agencies. Those cuts have hit the National Science Foundation (NSF), which is likely to slow U.S. AI research given how much grant funding the NSF distributes. There have likewise been reports that the National Institute of Standards and Technology (NIST) is bracing for a massive staffing cut that, if it comes, would likely cripple the U.S. AI Safety Institute (AISI), which was established by Biden’s executive order and is housed within NIST.

Tech leaders, meanwhile, would prefer a degree of status quo.

What happened: The newly-minted AI Innovators Alliance — a coalition of 20 tech executives — urged the Trump Administration in a Thursday letter to support AI research and the development of standards for “evaluation and measurement, safety, transparency and security.” 

Part of this, according to the letter, involves supporting such institutions as the AISI and NIST. 

  • The coalition said that leading on global AI standards will give the U.S. a “competitive advantage” in the field. 

  • “Programs like AISI are squarely on the pro-innovation side,” Americans for Responsible Innovation Executive Director Eric Gastfriend said in a statement, adding that it helps “the startup ecosystem by advancing basic research, promoting open science, partnering with startups and setting standards.”

At around the same time, Anthropic — last valued at $61.5 billion — said that the AISI should be “preserved” as part of a governmental effort to design and implement robust national security evaluations for generative AI models, work that the AISI has already undertaken. 

The proposal was part of a larger policy recommendation submitted by Anthropic to inform the administration’s AI action plan. 

  • Across the 10-page document, Anthropic advocated for an increase in export controls on semiconductors and semiconductor-related materials to slow down adversarial countries. In light of DeepSeek’s model releases, however, experts have told me that it is decidedly unclear whether the export controls already in place are working as intended. 

  • Songyee Yoon, an AI expert and the founder of the venture capital firm Principal Venture Partners, told me last month that harsher export controls will simply serve as motivation for China to build its own chips — and they have access to the rare earth minerals required to do so. 

Anthropic’s proposal additionally called for an expansion and enhancement of U.S. AI lab security and a more targeted, government-led exploration of the economic impacts generative AI is having so far. 

The proposal — which is predicated on Anthropic’s assumption that “powerful AI” will emerge under the Trump Administration — calls for “rapid AI procurement across the federal government” and a massive scale-up of energy supply exclusively for AI purposes. 

Anthropic expects that by 2027, training a single frontier GenAI model will require five gigawatts of power. The company called for the procurement of a minimum of 50 gigawatts of power, solely for the AI industry, by 2027. 

The document makes no mention of the limitations of current technology — algorithmic bias, persistent hallucinations and massive cybersecurity and data privacy vulnerabilities, to name a few — that ought to make adopting these systems across the government challenging, to say the least. It also makes no mention of acquiring clean or cleaner energy to supply this truly tremendous influx of power for AI. 

In 2023, U.S. electricity demand averaged roughly 450 gigawatts. 
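Taken together, those figures mean Anthropic’s ask works out to roughly 11% of average U.S. demand reserved for a single industry (50 ÷ 450 ≈ 0.11).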

This industry is full of hype and hypotheticals alike, scenarios that blur the lines between science and fiction to promise dramatic near-term revolutions. 

I like to approach these scenarios by first asking a question of motivation. 

Anthropic, like every AI company, is motivated first and foremost by a desire to sell its product, a motivation that likely increases with each multi-billion-dollar funding round it closes (investors must be appeased).

In the context of that motivation, only two of these recommendations stand out to me: one, the push to boost governmental adoption of AI — specifically, to give every government worker an “AI-powered assistant” — and two, the massive increase in energy capacity, rooted in a desire to remove or reduce the permits required to construct new energy plants and data centers. 

One way to look at this is as a desire to sell a story in which national dominance in AI is both vital and achievable only by supporting AI startups (because of the specter of “powerful” AI … how convenient). This story makes it easier for the companies in question to gain users, sign more contracts and access the power needed to keep feeding their models. If they can get this baked into regulation, growth at all costs will become even more accessible, while the public health cost of data center pollution goes ignored and pesky environmental reviews get pushed out of the way. 

The ethos of Silicon Valley — motivated, not by altruism, but by very clear business incentives — is “move fast and break things.” 

We would do well to remember that. 

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The trees are too 'feathery...'”

Selected Image 1 (Left):

  • “The real image is always Image 1 😂” — until it isn’t, and then I’ve got you on the ropes…

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Since today happens to be my birthday …

How many March birthdays do we have in here?


If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.