
⚙️ Tesla stock plummets following robotaxi unveil

Good morning. Hope you all had a nice weekend.

If you were Elon Musk — are you, by the way? That’d be something — your weekend was pretty mixed. The whole robotaxi thing didn’t go over very well, but, on Sunday, Musk’s SpaceX launched its Starship rocket for the fifth time.

The launch tower caught — literally — the rocket’s booster upon its return to Earth.

You win some, you lose some.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Search and rescue

  • Study: LLMs do not reason

  • Poll: US voters concerned about OpenAI’s o1 release

  • Tesla stock plummets following robotaxi unveil

AI for Good: Search and rescue

Source: Unsplash

As we’re about to get into in quite a bit of detail, generative artificial intelligence is really good at pattern recognition. This capability makes current generative algorithms — when combined with drones — a natural tool in the arsenal of wilderness search and rescue teams. 

The details: Earlier this year, a team of researchers at Scotland’s University of Glasgow designed such an algorithm. Then, they pitted it against existing search algorithms and search patterns. 

  • The model was trained on data from thousands of search and rescue operations around the world, importantly including specific details of each incident, such as where the lost person was found. 

  • The new model handily outperformed both common search methods and less sophisticated algorithms, finding simulated missing persons 19% of the time, compared to 8% and 12% of the time, respectively, for the benchmark solutions. 

The system isn’t yet ready for mass deployment; it needs quite a bit more testing, and it raises ethical questions about surveillance conducted without consent. But, as the lead researcher wrote: “This result is a clear indication that algorithms could improve the time to find a missing person and thus save lives. Whilst the experimental results are not concrete proof that humans are redundant, it does highlight the potential in fully autonomous drones used in real search missions.”
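For a concrete sense of why a learned prior helps, here is a minimal sketch in Python. It is not the Glasgow team’s model, and the “trail and river” prior below is invented purely for illustration: a simulated drone that visits grid cells in order of estimated probability finds the missing person within a fixed search budget far more often than one sweeping the area row by row.

```python
# Minimal sketch (not the Glasgow team's model): a drone that visits grid
# cells in order of an estimated "probability the person is here" finds
# simulated missing persons more often, within a fixed budget, than a
# uniform row-by-row sweep does.
import random

SIZE = 20        # search area is a SIZE x SIZE grid
BUDGET = 80      # cells the drone can check before the search is called off
TRIALS = 10_000

# Hypothetical prior: lost hikers tend to turn up near a trail along row 5
# or a river along column 14. A real system would learn this from incident data.
def prior(cell):
    r, c = cell
    return 1.0 / (1 + min(abs(r - 5), abs(c - 14)))

cells = [(r, c) for r in range(SIZE) for c in range(SIZE)]
weights = [prior(cell) for cell in cells]

lawnmower_order = cells                                   # row-by-row sweep
informed_order = sorted(cells, key=prior, reverse=True)   # highest prior first

def find_rate(order):
    searched = set(order[:BUDGET])
    hits = sum(random.choices(cells, weights=weights)[0] in searched
               for _ in range(TRIALS))
    return hits / TRIALS

print(f"lawnmower sweep: {find_rate(lawnmower_order):.0%} found within budget")
print(f"informed search: {find_rate(informed_order):.0%} found within budget")
```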

This game-changing device measures your metabolism

Health plans and fads are all over social media. The problem is that health is complicated and unique to each person, meaning the only health plans with a real shot at working are personalized ones.

That's where Lumen comes in. It’s the world’s first handheld metabolic coach that measures your metabolism with a simple breath. No guesswork, just personalized data to help you optimize your health.

Based on your daily breath measurement, Lumen’s app lets you know if you’re burning fat or carbs, then gives you tailored guidance to improve your nutrition, workouts, sleep and even stress management.

Lumen — which has logged 55 million metabolic measurements — builds you a personalized, scientifically sound and empirically complete health plan based on your data.

Study: LLMs do not reason

Source: Created with AI by The Deep View

A core element of the AI sector — and one that we revisit often — has to do with reasoning and intelligence. The question at hand is simple, though full of impactful implications: do Large Language Models (LLMs) understand their output? Can they reason? Or are they simple statistical pattern-matchers operating at scale? 

Thus far, there has been plenty of evidence, much of which we’ve covered here, that LLMs entirely lack understanding. They are probabilistic machines, operable only because of the immensity of their training data. 

What happened: A new study conducted by a team of Apple researchers, currently available as a preprint, adds more weight to the existing pile of evidence that LLMs cannot actually reason. 

  • The researchers designed a new benchmark, built around multiple variants of each question, to test the mathematical reasoning capabilities of LLMs. 

  • Testing a series of state-of-the-art open and closed models, the researchers found massive variance in LLM performance based on superficial changes. For example, performance decreased by 10% when the researchers changed nothing but the proper nouns in a given word problem. 

The team also found that as the questions got harder, not only did performance drop, but this variance in performance also increased. When the researchers added a sentence to the word problem that was irrelevant to the solution, performance dipped further, indicating that LLMs don’t understand mathematical concepts. 
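To make the stress test concrete, here is a rough sketch of the perturbation idea in Python, not the Apple benchmark itself: the word problem, the name list and the ask_model stub are all invented for illustration. Because the correct answer does not depend on the names or the extra sentence, a model that genuinely reasons should score the same on every variant; big swings point to pattern matching.

```python
# Rough sketch of the perturbation idea described above, not Apple's benchmark.
# ask_model() is a hypothetical stand-in for whatever LLM you are evaluating.
import itertools

TEMPLATE = ("{name} picks {n} apples on Friday and twice as many on Saturday. "
            "{filler}How many apples does {name} have in total?")

NAMES = ["Sophie", "Liam", "Priya", "Mateo"]   # proper-noun swaps
FILLERS = ["",                                 # original problem
           "Five of the apples are a bit smaller than the rest. "]  # irrelevant detail

def expected(n):
    return n + 2 * n   # ground truth is unaffected by names or filler

def ask_model(prompt):
    raise NotImplementedError("call your LLM of choice here")

def accuracy_across_variants(n=8):
    correct = []
    for name, filler in itertools.product(NAMES, FILLERS):
        prompt = TEMPLATE.format(name=name, n=n, filler=filler)
        answer = ask_model(prompt)
        correct.append(str(expected(n)) in answer)
    # A reasoning model should be right on every variant; large accuracy
    # swings across variants suggest surface-level pattern matching.
    return sum(correct) / len(correct)
```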

“We found no evidence of formal reasoning in language models including open-source models like Llama, Phi, Gemma and Mistral, and leading closed models, including the recent OpenAI GPT-4o and o1-series,” lead researcher Mehrdad Farajtabar said. “Their behavior is better explained by sophisticated pattern matching.” 

He added that scaling models and improving training data will likely result in better pattern-matchers, not better reasoners. 

Imagine Virtual Reality without Headsets

And that's just the beginning.

Elf Labs is making history in the $2 trillion media & entertainment space. By leveraging 12 advanced tech patents, they’ve developed a groundbreaking way to create immersive 3D experiences that seamlessly connect the physical and digital worlds—without the need for headsets.

Even more exciting...

After 100+ historic victories at the US Patent & Trademark Office, they own trademarks to some of the highest-grossing character IP in history.

Now, they’re merging this next-gen tech with their iconic characters like Cinderella, Snow White, Little Mermaid, Pinocchio and more.

With two projects already funded, there’s limited space left. Become an Elf Labs shareholder today, before the round closes on Oct. 30. Plus, if you invest now, you can earn up to 40% bonus shares & other exclusive investor perks!

  • What internet data brokers have on you — and how you can start to get it back (CNBC).

  • SpaceX’s Starship test completes with a remarkable ‘chopstick’ booster catch (The Verge).

  • Who will save the kids from bad tech? (The Information).

  • Microsoft Azure CTO: US data centers will soon hit size limits (Semafor).

  • Philippine chipmakers are embracing automation — and leaving low-skilled workers behind (Rest of World).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

  • Direqt: A chatbot platform for reader engagement through personalized conversations on websites.

  • NexusGPT: A tool to create and integrate custom AI agents for workflow automation.

Poll: US voters concerned about OpenAI’s o1 release

Source: Created with AI by The Deep View

Last month, following months of increasingly dramatic rumors, OpenAI released o1, a powerful new Large Language Model (LLM) that, in a marked difference from its other models, was designed to ‘think’ before it generates a response. The technical idea here is called “chain of thought”: the model breaks a query down into a series of intermediate steps before producing a final answer, an approach that tends to improve the quality of the output. 
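For readers who haven’t seen the technique, here is roughly what chain-of-thought prompting looks like from the caller’s side. This is a generic Python illustration, not OpenAI’s implementation; o1 builds the step-by-step behavior into the model itself rather than relying on prompt wording.

```python
# Generic chain-of-thought prompting, shown for illustration only; this is
# not how o1 works internally, and the exact wording is just an example.
question = ("A train leaves at 2:40 pm and the trip takes 95 minutes. "
            "When does it arrive?")

# Direct prompting: ask for the answer in one shot.
direct_prompt = question

# Chain-of-thought prompting: ask the model to lay out intermediate steps
# before committing to a final answer, which tends to improve accuracy on
# multi-step problems like this one.
cot_prompt = (
    question
    + "\nThink step by step: break the problem into intermediate steps, "
      "show each step, then state the final answer on its own line."
)

# Send each prompt to your LLM of choice and compare the two answers.
print(direct_prompt)
print()
print(cot_prompt)
```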

o1, contrary to the hyped-up marketing language written into the interface itself, is not actually thinking. But it is a far more powerful model than any OpenAI had released before; OpenAI’s own system card found that the model posed a ‘medium’ risk of persuasion and biological threat creation. The company therefore classified o1 as a ‘medium’ risk model, the highest risk level at which OpenAI still considers a model safe to deploy. 

What happened: A recent poll of U.S. voters by the Artificial Intelligence Policy Institute (AIPI) found that the majority of voters are concerned about the model. 

  • After being presented with details regarding o1’s capabilities, 64% of voters said that o1 is more powerful than they thought AI was currently capable of; 64% also believe that independent auditors should have had more time with the model before it was released. 

  • 57% of voters believe outside reviews of AI models pre-release ought to be mandatory, and 53% of voters said they were more concerned than excited about OpenAI’s o1. 

"Contrary to some elite discourse about AI being overhyped, this new survey data makes it abundantly clear that the general public is feeling scared, not underwhelmed, about AI development,” Daniel Colson, the executive director of the AIPI, said in a statement.

Tesla stock plummets following robotaxi unveil 

Source: Tesla

In the case of Elon Musk’s highly-anticipated robotaxi unveil, it seems that investors were not sold on theatrics. Shares of Tesla dipped hard following the event, closing Friday down nearly 9%. Funnily enough, shares of Uber spiked nearly 11% at the same time, an indication that investors are pretty confident Tesla’s robotaxis won’t put Uber out of business. 

Let’s get into what happened (or didn’t happen) that left investors so unhappy. 

  • At the event, which took place at a Warner Bros. studio in California, Musk unveiled the Cybercab and the Robovan. 

  • The Cybercab, a flashy two-seater robotaxi, will, according to Musk, enter production in 2026 and will apparently cost less than $30,000. Uber drivers, Musk said, would become like shepherds, tending their flocks of robotaxis. The Robovan, the Cybercab’s big brother, can take up to 20 passengers. The vehicles will come assembled without a steering wheel or pedals. 

Prototypes of the vehicles — alongside remote-operated Optimus robots — roamed around during the event.

But Tesla’s robotaxi fleet will start — if you take Musk at his word — with its Model 3 and Model Y vehicles, which Musk said will go fully-autonomous in California and Texas next year. Of course, this would require pretty massive regulatory approval, something Tesla does not yet have. 

The problem is that, once again, Musk delivered plenty of theatrics without any details. Stages of regulatory approval remain unclear and technical advancements regarding Tesla’s autonomous tech remain lacking. 

Tesla’s current system, which it misleadingly brands Full Self-Driving (FSD), is a Level 2 autonomous system, meaning it requires the hands-on, eyes-on attention of the driver. It is not at all clear how Tesla plans to jump from Level 2 to a Level 4 or Level 5 fully autonomous system, especially without adjusting the vehicle’s sensor array. 

Waymo, which operates as a Level 4 (ish) system, comes loaded with radar and lidar in addition to computer vision tech (and a team of humans standing by for remote assistance); the lidar is essential as a redundancy layer since computer vision and artificial intelligence have a number of dangerous vulnerabilities. But Musk doesn’t like lidar. Tesla’s self-driving tech uses only computer vision, something that some experts have told me will prevent Tesla from ever jumping beyond its current autonomy level. 

Musk has made predictions like these before, promising for years that full autonomy was right around the corner, and none of those predictions bore any fruit. The reality is that the self-driving problem is far more complex than it seems, requiring a ton of computing power and highly advanced algorithms to operate with even a semblance of autonomy. Full autonomy, especially at any kind of scale, remains far away, thanks to that confounding combination of enormous technical and regulatory hurdles. 

"We found Tesla's Robotaxi event to be underwhelming and stunningly absent on detail," Bernstein analyst Toni Sacconaghi said in a note Friday.

Self-driving cars sound good. But they’re still years away. It’s a really hard technical challenge that requires concerted, cautious efforts. It doesn’t just happen. And neural networks alone aren’t enough. 

I don’t think Tesla will have it solved anytime soon. And I don’t think regulators will be eager to let unsupervised Teslas roam their streets until it is solved.

I don’t think we’ll see the Cybercab for a while.

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “The texture of image #2 gave it away. Again.”

Selected Image 2 (Right):

  • “This image looks less perfect than the first picture which looked framed. I find it incredible that the AI image looks much more realistic than the former.”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on persuasive memes:

A third of you haven’t really encountered persuasive memes online, but an equal number of you encounter them all the time. Interesting. 20% of you said that, now that you think about it, you do run into persuasive memes on social media.

All the time:

  • “Sayings of ‘wisdom’ are very common, and often application is left in the comments underneath the image.”

Something else:

  • “Don't know enough to identify them.”

What's your timeline for Elon's robotaxis?
