
⚙️ Mark Zuckerberg one-ups OpenAI ... again

Good morning. Did anyone see the Aurora this weekend?

No luck for me, but then again, I don’t really stay up that late in the first place.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • AI for Good: Methane mapping

  • Waymo’s big expansion

  • Meta’s data limits

  • Mark Zuckerberg one-ups OpenAI ... again

AI for Good: Methane mapping

Source: EDF

When we talk about global warming, we often point to a single culprit: carbon dioxide emissions. But there’s more than one type of greenhouse gas, and methane is a major contributor to global warming.

What happened: Earlier this year, the Environmental Defense Fund (EDF) launched a satellite — MethaneSAT — that will map, measure and track methane, in part using AI-enabled software from Google. 

  • Orbiting Earth 15 times each day, the satellite is able to regularly analyze methane levels — identifying the sources of methane emissions — around the world. 

  • As part of this, Google and the EDF are also creating a map of oil and gas infrastructure, which, coupled with the methane detection algorithms, can identify instances of methane leaks. 
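
To make that coupling concrete, here is a minimal, purely hypothetical sketch (not EDF’s or Google’s actual pipeline) of how detected methane plumes could be attributed to nearby oil and gas assets; the data structures, example coordinates and the 10 km search radius are assumptions for illustration.

```python
# Purely illustrative sketch of attributing detected methane plumes to nearby
# oil and gas infrastructure. NOT EDF's or Google's actual pipeline; all names,
# coordinates and the 10 km radius are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Plume:
    lat: float
    lon: float
    emission_rate_kg_hr: float  # estimated emission rate from the detection algorithm

@dataclass
class Asset:
    name: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def attribute_leaks(plumes, assets, max_km=10.0):
    """Match each plume to the nearest asset within max_km (None if nothing is close)."""
    matches = []
    for p in plumes:
        nearest = min(assets, key=lambda a: haversine_km(p.lat, p.lon, a.lat, a.lon))
        dist = haversine_km(p.lat, p.lon, nearest.lat, nearest.lon)
        matches.append((p, nearest if dist <= max_km else None, dist))
    return matches

# Hypothetical example data: one plume near a fictional Permian Basin well pad.
plumes = [Plume(31.90, -102.10, 450.0)]
assets = [Asset("Permian well pad A-17", 31.93, -102.08)]
print(attribute_leaks(plumes, assets))
```

Real attribution is far more involved (plume imagery, wind modeling, uncertainty estimates), but the basic idea of joining detections against an infrastructure map is what the bullet above describes.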

Why it matters: The key to mitigation is knowledge. Here, step one is tracking methane sources; step two is targeted reduction. 

Guidde - Create how-to video guides fast and easy with AI

Tired of explaining the same thing over and over again to your colleagues?

It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

  • Share or embed your guide anywhere

  • Turn boring documentation into stunning visual guides

  • Save valuable time by creating video documentation 11x faster

Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.

The best part? The extension is 100% free.

Waymo’s big expansion

Source: Waymo

Self-driving firm Waymo last week announced a partnership with Hyundai Motor Company. 

The details: As a first step together, Waymo will outfit Hyundai’s all-electric Ioniq 5 SUV with its latest self-driving tech; the vehicles will gradually join Waymo’s fleet of robotaxis. 

  • The companies will begin on-road testing in late 2025; the vehicles will become available over the following years. 

  • The partnership marks a significant step in Waymo’s expansion; the company has been scaling rather cautiously as it seeks to maintain its relatively clean self-driving track record. 

The reality of self-driving cars and Waymo is that, while the safety numbers look good right now, the scale is hyper-specific and quite small. The challenge of building a reliably safe self-driving car is the same as the challenge of building a reliable generative AI model — and, as of right now, the science is not clear that genuine reliability is achievable with the current architecture. 

Elf Labs, a next-gen tech company—with rights to billion-dollar character IP like Cinderella, Little Mermaid, & Snow White—is now opening its doors to investors for a limited time.

With 100+ trademarks, 400+ copyrights, and 12 revolutionary tech patents, Elf Labs is leading the way in immersive entertainment.

Think hyper-realistic AI-powered 3D worlds at scale. Think global licensed merchandise in the $350B Licensing Industry.

But the clock is ticking… Invest in Elf Labs before 10/30 to get in on the ground floor! 

  • Why women in tech are sounding the alarm (The Information).

  • Telegram gave U.S. user data to the cops (404 Media).

  • WordPress CEO Matt Mullenweg goes ‘nuclear’ on Silver Lake, WP Engine (CNBC).

  • Greening of Antarctica shows how climate change affects the frozen continent (Ars Technica).

  • Irish privacy regulator probes Ryanair's use of facial recognition (Reuters).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Meta’s data limits 

Source: Unsplash

Before Meta was Meta — and before Meta was using our personal data to train its AI models — the social media giant was using our personal data to inform targeted advertising campaigns and control our individual feeds. 

This practice has been allowed to proceed largely unchecked in the U.S. But the European Union has enacted several laws governing data processing, most notably the GDPR. 

In 2014, Max Schrems, as an individual, filed a lawsuit against Meta Ireland accusing the company of a series of privacy violations. Last week, the Court of Justice of the European Union (CJEU) ruled on some of the key GDPR questions raised by the suit. 

  • The CJEU said that online social networks similar to Facebook “cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data.”

  • This is an idea enshrined in the GDPR as “data minimization,” and it impacts even users who consent to targeted advertising. 

Why it matters: "Meta has basically been building a huge data pool on users for 20 years now, and it is growing every day,” Schrems’ lawyer, Katharina Raabe-Stuppnig, said. “Following this ruling, only a small part of Meta's data pool will be allowed to be used for advertising — even when users consent to ads. This ruling also applies to any other online advertisement company, that does not have stringent data deletion practices."

AI Startups get up to $350,000 in credits with Google Cloud

For startups, especially those in the deep tech and AI space, having a dependable cloud provider is absolutely vital to success.

Fortunately, Google Cloud exists. And it offers a program — the Google for Startups Cloud Program — designed specifically to help startups succeed.

  • The program importantly offers eligible startups up to $200,000 in Google Cloud Credits over two years. For AI startups, that number is $350,000.

Beyond the additional cloud credits, eligible AI startups also get access to Google Cloud’s in-house AI experts, training and resources.

This includes webinars and live Q&As with Google Cloud AI product managers, engineers and developer advocates, in addition to insight into Google Cloud’s latest advances in AI.

Program applications are reviewed and approved based on the eligibility requirements here.

Mark Zuckerberg one-ups OpenAI … again

Source: Meta

Meta last week unveiled Movie Gen, a new generative AI foundation model capable of generating video, audio and images. It also offers capabilities like personalized video generation and precise video editing, which CEO Mark Zuckerberg demonstrated in an Instagram video in which the model dressed him in an AI-generated Roman gladiator outfit while he was leg pressing at the gym. 

The model will be made available on Instagram in 2025. 

The details: The idea is to “use simple text inputs to produce custom videos and sounds, edit existing videos and transform your personal image into a unique video,” according to Meta. 

  • The social media giant said in a research paper that accompanied the release that the model outperforms similar state-of-the-art models, including Kling 1.5 and OpenAI’s Sora, which has yet to be released. 

  • “While there are many exciting use cases for these foundation models, it’s important to note that generative AI isn’t a replacement for the work of artists and animators,” Meta said in a blog post heralding the arrival of the new model. “We’re sharing this research because we believe in the power of this technology to help people express themselves in new ways and to provide opportunities to people who might not otherwise have them.”

The training: Meta’s researchers said that the key to this newest model was scale: “scaling the training data, compute and model parameters of a simple Transformer-based model trained with Flow Matching yields high-quality generative models for video or audio.”
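
Flow Matching itself is a fairly simple training recipe: the model learns to predict the velocity that carries a noise sample toward a data sample along a straight-line interpolation. Below is a minimal, generic sketch of that objective in PyTorch; it is not Meta’s Movie Gen code, and the toy network, tensor shapes and hyperparameters are assumptions for illustration only.

```python
# Minimal, illustrative Flow Matching training loop (rectified-flow style).
# NOT Meta's Movie Gen implementation; the toy network and shapes are assumptions.
import torch
import torch.nn as nn

class TinyVelocityNet(nn.Module):
    """Stand-in for the Transformer backbone: predicts the velocity v(x_t, t)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

dim, batch = 64, 32
model = TinyVelocityNet(dim)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(100):
    x1 = torch.randn(batch, dim)   # stand-in for encoded video/audio latents
    x0 = torch.randn(batch, dim)   # pure noise sample
    t = torch.rand(batch, 1)       # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1    # straight-line interpolation between noise and data
    target_v = x1 - x0             # velocity of that path
    loss = ((model(x_t, t) - target_v) ** 2).mean()  # regress predicted velocity onto target
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At generation time, the learned velocity field is integrated from noise toward data. Movie Gen’s bet, per the paper, is that this simple recipe keeps working when the backbone is a large Transformer operating on video and audio latents at massive scale.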

In line with this, the model was pre-trained on “internet scale” image, video and audio content. Meta did not clarify whether this data was licensed or scraped off the internet without the knowledge, permission or compensation of its original creators. The legality of the latter is still being battled in court across a number of different copyright infringement lawsuits. 

Part of the pre-training for the model included 100 million video-text pairs and one billion image-text pairs. The sources of these images and videos remain unclear, though Meta has been consuming the posts of its Facebook and Instagram users for the express purpose of training its AI models. This, the company has said, is the thing that sets its models apart from the competition. 

While Meta did not — in the 92 pages of its research paper — reference electricity consumption or carbon emissions associated with the training or operation of the new model, the company did say the models were trained on up to 6,144 H100 GPU chips. 

The release of this model is interesting for the same reason that the release of Meta’s Llama 3.1 was interesting: its very existence erases the idea of a moat and makes it clear that OpenAI doesn’t have one. 

OpenAI is widely viewed as the best, but it is closed off and charges for access to its models. And according to some reports, the company is planning to hike the cost of that access soon. 

Meta — following more of an open-source approach — has seemingly achieved technical parity with OpenAI’s models, except that Meta’s models are available for free. Plus, Meta doesn’t need to make sure its models turn a profit; the cost of their construction is subsidized by Meta’s enormous social media advertising business. 

The release of Movie Gen — coming while OpenAI’s Sora remains conspicuously absent — might push Meta yet another rung above OpenAI. Again, the existence of Meta’s models calls into question OpenAI’s business model and its road to eventual profitability: why pay for something you can access for free? 

Business aside, I remain concerned about the free accessibility of increasingly convincing generative tools, which we have seen are fueling a tide of misinformation and abusive deepfakes. 

Which image is real?


🤔 Your thought process:

Selected Image 1 (Left):

  • “One guy in image 2 seems to have extra legs.”

Selected Image 2 (Right):

  • “Image 2 is a pretty good fake!!”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI coding assistants:

40% of you said you use AI coding assistants sometimes, and that they’re just okay. 20% of you said you don’t need them and you don’t want them; 15% said you use them all the time and they’re amazing.

Something else:

  • “It's good for info snippets on right use when learning a new language, but it's terrible at keeping the overview for any decent size program: it freely deletes comments, pieces of code it doesn't understand, changes procedure and variable names etc. Spend more time debugging than ever...”

Are you excited about Movie Gen?
