⚙️ Figure AI announces robotics breakthrough

Good morning, and happy Friday. Today marks my 200th edition — thank you all so much for tuning in.
Today, we’re talking about robots.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🔬 AI for Good: Viral discoveries
📊 OpenAI says it has surpassed 400 million weekly active users
🏛️ US AI Safety Institute’s future in question as NIST braces for staffing cuts
🤖 Figure AI announces robotics breakthrough
AI for Good: Viral discoveries

Source: Unsplash
Last year, researchers at the University of Sydney used machine learning to discover a batch of more than 160,000 new virus species.
What happened: The researchers built a deep learning model called LucaProt and used it to analyze vast quantities of genetic sequence data, including virus genomes. That analysis enabled the discovery of the 160,000 new virus species.
Senior author Professor Edward Holmes said that most of those species had already been publicly sequenced, “but they were so divergent that no one knew what they were. They comprised what is often referred to as sequence ‘dark matter’. Our AI method was able to organize and categorize all this disparate information, shedding light on the meaning of this dark matter for the first time.”
It represents an opportunity to massively speed up the rate of virus discovery, which is traditionally time-consuming.
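For readers who want a feel for the general technique, here is a minimal, illustrative sketch: encode a raw protein sequence, then score how likely it is to be viral, so that huge public datasets of unclassified “dark matter” can be triaged automatically. Everything here (the model, dimensions, and example sequence) is a placeholder assumption, not LucaProt’s actual architecture, which isn’t detailed above.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- NOT LucaProt's actual architecture.
# Pattern: embed a raw amino-acid sequence, encode it with a small
# Transformer, and output the probability that it is viral.

VOCAB = {c: i for i, c in enumerate("ACDEFGHIKLMNPQRSTVWY")}  # 20 amino acids

def encode(seq: str, max_len: int = 256) -> torch.Tensor:
    """Map a protein string to a fixed-length tensor of token ids."""
    ids = [VOCAB.get(c, 0) for c in seq[:max_len]]
    ids += [0] * (max_len - len(ids))  # pad (0 doubles as padding in this toy)
    return torch.tensor(ids)

class SequenceClassifier(nn.Module):
    def __init__(self, vocab_size: int = 20, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, 1)  # score: P(sequence is viral)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(ids)).mean(dim=1)  # pool over positions
        return torch.sigmoid(self.head(h))

model = SequenceClassifier()
score = model(encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").unsqueeze(0))
print(f"viral score: {score.item():.3f}")  # untrained, so roughly 0.5
```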
Why it matters: Holmes said that the breakthrough opens up “a world of discovery. There are millions more to be discovered, and we can apply this same approach to identifying bacteria and parasites.”
These viruses, though commonly associated with human illness, are also found in extreme environments around the world. And as is often the case, scientific advancement begins with discovery and identification.

Start speaking a new language by Spring!
Through award-winning lessons, addictive games, and more bonus content, you can start speaking a new language in 3 weeks with Babbel, just in time for Spring.
OpenAI says it has surpassed 400 million weekly active users

Source: OpenAI
The news: OpenAI COO Brad Lightcap said Thursday that ChatGPT recently surpassed 400 million weekly active users. This is a massive (33%) increase over the 300 million weekly active users that OpenAI had noted as recently as December.
Twitter, for comparison, has about 500 million monthly active users, and Instagram has around two billion (though those are monthly, not weekly, figures). It’s not clear how OpenAI measures this number; active-user counts, due in part to a rising quantity of bots, can be relatively easily gamed, especially since a “weekly” figure can refer to any seven-day period in a given month.
OpenAI didn’t respond to a request for comment regarding this.
Lightcap added that this growth is reflected in the enterprise as well, saying that ChatGPT now has more than two million business users.
“People hear about it through word of mouth. They see the utility of it. They see their friends using it,” Lightcap told CNBC. “There’s an overall effect of people really wanting these tools, and seeing that these tools are really valuable.”
The landscape: The apparent growth comes as the startup reportedly remains in talks to raise $40 billion in fresh funding from SoftBank at a roughly $300 billion valuation, despite the $5 billion in losses (on $3.7 billion in revenue) that OpenAI recorded last year.
OpenAI CFO Sarah Friar told CNBC Thursday that it’s “within the realm of possibility” for the startup to hit $11 billion in revenue this year.

Want to feel lighter?
In a time of constant movement and change, it can be hard to stick to a routine, leaving many feeling frustrated by their inability to maintain healthy habits.
This is where Seed’s DS-01® Daily Synbiotic comes in. Taking DS-01 daily is a simple habit that can help people take back control during the busiest time of the year.
DS-01® Daily Synbiotic combines 24 clinically and scientifically studied probiotic strains with a non-fermenting prebiotic to support gut health and enhance nutrient absorption, benefiting whole-body health.
With vegan ingredients and sustainable packaging, it’s a powerful, research-backed addition to your daily wellness routine.
Take the first step toward better health — start your DS-01® journey today.


Microsoft is getting ready to boost its server capacity to host OpenAI’s upcoming GPT-4.5 (and GPT-5) models.
The IRS has reportedly begun mass firings as tax season continues to heat up.

Amazon to gain creative control of James Bond franchise from Broccoli family (CNBC).
DOGE puts $1 spending limit on government employee credit cards (Wired).
Kash Patel confirmed to lead the FBI (Semafor).
Turkey’s translators are training the AI tools that will replace them (Rest of World).
Salesforce in talks with Microsoft, Oracle and Google about cloud deal to handle AI (The Information).

Join the companies that scale smarter with elite talent from Latin America.
With Athyna, you get access to top-tier Full Stack Developers like Santiago in days, not months.
Hire pre-vetted talent and save 70% on salaries—ready to start now!
US AI Safety Institute’s future in question as NIST braces for staffing cuts

Source: NIST
The news: Axios reported this week that all probationary employees at the National Institute of Standards and Technology (NIST), which houses the U.S. AI Safety Institute, are bracing to be “fired imminently.”
Sources told Axios that they expect 497 employees to be let go, a move they believe will cripple the U.S. AISI in addition to NIST’s CHIPS for America program.
NIST, the U.S. AISI and the Department of Commerce did not respond to requests for comment.
While the term “probationary employee” generally refers to recent hires, most of whom serve a year-long probationary period, probation is not limited to new hires: an employee who jumps to a new agency and begins performing a different job can be subject to another year-long probationary period.
As Stuart Buck, the executive director of the Good Science Project, said: “firing the newest (‘probationary’) government employees is a great way to cripple new fields (such as AI).”
What it means: The AISI was created to oversee voluntary model testing. The organization has signed agreements with OpenAI and Anthropic to evaluate the safety of their models before they are deployed. Specifically, the AISI focuses on potential national security risks, evaluations that, as Aviya Skowron, the head of policy and ethics at EleutherAI, pointed out, cannot be conducted by private companies due to the classified nature of national security information.
Beyond the AISI, the cuts could further cripple the government’s 2022-era effort to boost semiconductor investment in the U.S., with cuts set to include 57% of CHIPS staff focused on incentives and 67% of CHIPS staff focused on research and development.
Samuel Hammond, a senior economist at the Foundation for American Innovation, said that “if this happens, the U.S. will lose its in-house capacity for model testing, and thus our role in setting AI standards that align with Western values. Projects to rebuild our chip industry will be kneecapped. AI dominance will shift to China.”
The landscape: Elizabeth Kelly, the former director of the AISI, stepped down two weeks ago. This all comes in the wake of President Donald Trump’s efforts to roll back former President Joe Biden’s executive orders on AI, which focused both on advancement as well as safety — with a specific focus on civil rights.
Figure AI announces robotics breakthrough

Source: Figure AI
The news: Shortly after severing its partnership with OpenAI, robotics firm Figure launched and demoed a new Vision-Language-Action (VLA) model called Helix, a system that, according to Figure, overcomes “multiple longstanding challenges in robotics.”
The details: The demo video — which is underscored by the most futuristic-sounding music imaginable — features two Figure robots being tasked with putting away groceries that they have apparently never seen before. Over the course of the ensuing two minutes, the robots open both a cabinet and a refrigerator, putting items ranging from an onion to a bag of shredded cheese in their proper place.
Figure said that its robots powered by Helix, a model built entirely in-house, “can now handle virtually any household item.” Figure called this a “pivotal” step in bringing its robots to the home.
In a blog post, Figure said that the model is the first of its kind to operate simultaneously on two robots, enabling robotic collaboration toward a shared goal. The company added that Helix runs entirely on onboard, low-power GPUs, making it ready for commercial deployment.
The idea was to translate the information found in vision-language models directly into robotic actions, something Figure apparently achieved by leveraging two neural networks: one for slower “reasoning,” the other for faster, reactive control (a pattern sketched below).
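To make the “two systems” idea concrete, here is a minimal sketch of that pattern. The module names, sizes, and update rates below are illustrative assumptions, not Figure’s published architecture: a slow vision-language module periodically digests camera images and a command into a latent summary of what to do, while a fast policy turns the latest latent plus robot state into actions at control rate.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a generic slow/fast VLA control loop -- an
# assumption-laden toy, not Figure's Helix. The slow module decides
# *what* to do; the fast module decides *how* to do it, right now.

class SlowReasoner(nn.Module):
    """Low-frequency module: images + language command -> latent plan."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.vision = nn.Sequential(nn.Flatten(), nn.LazyLinear(latent_dim))
        self.text = nn.Embedding(10_000, latent_dim)  # stand-in text encoder
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, image: torch.Tensor, command_ids: torch.Tensor) -> torch.Tensor:
        v = self.vision(image)
        t = self.text(command_ids).mean(dim=1)
        return self.fuse(torch.cat([v, t], dim=-1))

class FastController(nn.Module):
    """High-frequency module: latest latent + proprioception -> joint actions."""
    def __init__(self, latent_dim: int = 512, state_dim: int = 35, action_dim: int = 35):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + state_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, latent: torch.Tensor, robot_state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([latent, robot_state], dim=-1))

slow, fast = SlowReasoner(), FastController()
image, command = torch.randn(1, 3 * 224 * 224), torch.randint(0, 10_000, (1, 8))
latent = slow(image, command)
for tick in range(200):          # fast loop at control rate
    if tick % 25 == 0:           # slow module refreshes far less often
        latent = slow(image, command)
    action = fast(latent, torch.randn(1, 35))  # dummy robot state
```

The design choice this illustrates: the expensive, general model never has to keep up with the motors, and the reactive controller never has to understand language.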
Dr. Kostas Bekris, a computer science professor at Rutgers who specializes in robotics, called the demo an “exciting development” for Figure, saying that the approach is likely to further push the industry toward the trend of developing large, unified models for robotics.
“At the same time, the demonstration is rather well-choreographed,” he told me. “The objects are placed rather sparsely on a tabletop surface without any occlusions so that the picking task is simplified. The generalization claims would require independent verification. My guess is that it would be easy to set up similar object manipulation problems where the same robots and models will fail.”
Figure said it trained the model on roughly 500 hours of “high quality … diverse teleoperated behaviors,” adding that the training process is “highly efficient.”
This essentially means that Figure had people remotely operate their robots to perform a variety of tasks for a total of 500 hours, then trained its neural networks to imitate those behaviors. This training by imitation represents the dominant approach in the field at the moment.
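As a rough illustration, this kind of imitation learning (often called behavior cloning) boils down to supervised regression on logged (observation, action) pairs. The sketch below is a generic toy with placeholder dimensions and random stand-in data, not Figure’s actual training stack.

```python
import torch
import torch.nn as nn

# Toy behavior cloning: fit a policy to mimic expert (teleoperated) actions.
# Dimensions and data are placeholders, purely for illustration.

obs_dim, action_dim = 64, 16
# In practice these pairs come from hours of logged teleoperation.
demos = [(torch.randn(obs_dim), torch.randn(action_dim)) for _ in range(1_000)]

policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for obs, expert_action in demos:
        loss = nn.functional.mse_loss(policy(obs), expert_action)  # imitate
        opt.zero_grad()
        loss.backward()
        opt.step()
```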
Much about the report and the demo remains unclear — none of the information provided, as Bekris noted, has been independently tested or verified. Bekris added that the blog “does not describe any efforts related to the safety verification of the output from the machine learning models.”
Dr. Christian Hubicki, a robotics research scientist at Florida State University, told me that “reliability remains the primary stumbling block for general-purpose robots. We have very little data on that from these companies … These demonstrations are ever-tantalizing of what could be possible someday soon, but we need to be able to count on these robots to do what we ask.”
The landscape: Last year, Figure raised $675 million at a $2.6 billion valuation. At around the same time, the company announced its first commercial deal, which brought Figure’s robots to BMW’s automotive production line. Additional corporate clients remain unknown.
Reuters reported last week that Figure is in talks for new funding at a $39.5 billion valuation.
The company’s “master plan” is to eventually deploy billions of its humanoid robots in areas such as manufacturing and shipping, before sending them off to “build new worlds on other planets.”
“No doubt I applaud Figure AI and its competitors in the field for their progress in the past couple years. Training a single network, end-to-end, to accomplish this variety of tasks with limited data shows we have not hit a ceiling on the variety of tasks robots can complete,” Hubicki said. “But we will need a fundamental leap in the reliability of these algorithms before I trust them to load my dishwasher.”

The hardware definitely looks impressive here, though I remain wary of, and somewhat frustrated by, these demo videos, with their futuristic soundtracks and high production value.
It takes the lack of independent verification to a whole new level, and, in the robotics world, misleading demonstrations have become quite common (I’m looking at you, Tesla).


Which image is real?



🤔 Your thought process:
Selected Image 1 (Left):
“This picture has tiny car headlights on roads, and the other picture had a main building that was a lot of copy-paste arches. That said, this was still a tricky one! the correct picture had a 'fairy lights' look to it.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on the Humane AI Pin:
40% of you never got a pin in the first place, so you’re good. 35% of you are thoroughly unsurprised by this.
And five of you have a Humane pin that is on its way to the garbage.
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.