⚙️ Interview: How new tech is making digital trust complicated

Good morning. On my way back to Jersey today after a whirlwind of a conference here at HumanX in Vegas. Will have some major takeaways for you tomorrow, but I can tell you right now that an increasing focus across the industry seems to be on trust: how it can be earned and how it can be leveraged to drive adoption.
With that in mind, the ethical, security and reliability challenges are all top of mind for developers and technologists.
And those concerns just don’t have an easy fix.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
✈️ AI for good: Smarter airplanes
💻 Google’s latest model can be run on a single GPU
🚨 Interview: How new tech is making digital trust complicated
AI for good: Smarter airplanes

Source: Unsplash
Scientists at the German Aerospace Center (DLR) have been exploring technological approaches to improve air travel. In this case, we’re talking about passenger comfort and fuel efficiency, and tying it all together is a careful level of automation.
The details: The project, which ran from 2020 to 2024, centers around something called intelligent load control. The basic idea is that a combination of specialized surface sensors and algorithms will enable planes to react “in advance to gusts of wind and maneuvers by adjusting control surfaces and flaps at lightning speed.”
The sensors include laser systems and Lidar sensors, which also serve as a core technology in most self-driving vehicles. At 30,000 feet, the Lidar measures wind fields and can “detect approaching gusts at an early stage.”
Combined with algorithms, this means that a given aircraft can react to wind changes proactively by automatically adjusting rudders and flaps.
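The proactive idea can be sketched in a few lines. This is purely an illustrative feedforward controller, not the DLR's actual control law; the gain and deflection limit are made-up numbers:

```python
import math

# Toy sketch of feedforward gust-load alleviation (illustrative only,
# not the DLR's algorithm). A lidar "measurement" of the vertical gust
# velocity ahead of the aircraft lets the controller deflect a flap
# *before* the gust arrives, rather than reacting after the fact.

def flap_deflection(gust_velocity_ms: float, airspeed_ms: float,
                    gain: float = 0.8, max_deflection_deg: float = 15.0) -> float:
    """Feedforward command: counter the gust-induced change in angle
    of attack with an opposing flap deflection, clipped to actuator limits."""
    # Gust-induced change in angle of attack (geometric relation).
    delta_alpha_deg = math.degrees(math.atan2(gust_velocity_ms, airspeed_ms))
    command = -gain * delta_alpha_deg
    return max(-max_deflection_deg, min(max_deflection_deg, command))

# A 10 m/s updraft at 230 m/s cruise produces a small opposing command;
# an extreme gust saturates at the actuator limit.
print(flap_deflection(10.0, 230.0))
print(flap_deflection(200.0, 100.0))
```

The point of the feedforward structure is timing: because the gust is sensed before it hits, the correction can be applied with zero delay, which is what lets the structure absorb less of the load.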
"Through the clever interaction of highly developed control surfaces and modern sensors, we can better absorb turbulence, minimize the load on the aircraft structure and thus develop more efficient aircraft," Dr. Lars Reimer, project manager at the DLR Institute of Aerodynamics and Flow Technology, said.
The tech can reduce fuel consumption by more than 7%, according to the DLR.
Wind tunnel tests and simulations have so far verified the efficacy of the approach; the next step involves getting some research craft into the air.
This comes as clear-air turbulence is steadily getting worse, due to climate change.

Disrupting a Trillion Dollar Market
Isn't it time we build houses like everything else?
BOXABL thinks assembly line automation can lead to building homes faster, at a higher quality, for a lower cost. Protected by 53 patent filings, BOXABL has raised over $170M since 2020 from over 40,000 investors.
They believe BOXABL has the potential to disrupt a massive and outdated trillion dollar building construction market. But the round on StartEngine is closing soon!
Don’t miss out on investing in BOXABL for $0.80/share on StartEngine. Click here to learn more.
Google’s latest model can be run on a single GPU

Source: Google
The news: Google on Wednesday unveiled Gemma 3, a multimodal upgrade to its Gemma family of lightweight, purportedly open generative AI models.
Like Gemma 2, which was released last summer, Gemma 3 is designed to run on standard consumer hardware; Google called it the “most capable model you can run on a single GPU or TPU.”
The details: Though the architecture of the model is largely the same as the architecture behind Gemma 2, Google explained in an associated research paper that it undertook a “novel post-training approach that brings gains across all capabilities, including math, coding, chat, instruction following and multilingual.”
“The resulting Gemma 3 instruction-tuned models are both powerful and versatile,” according to the firm, “outperforming their predecessors by a wide margin.”
The paper has not been independently verified.
That approach involves distillation — a technique DeepSeek applied to great effect, in which a smaller ‘student’ model learns from a larger ‘teacher’ model — as well as reinforcement learning.
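The mechanics of distillation are worth seeing concretely. Below is a generic sketch of the standard distillation objective — temperature-softened teacher outputs matched by the student via KL divergence — not Google's specific training recipe; the logit values are invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions -- the objective
    a smaller 'student' minimizes to mimic a larger 'teacher'."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # a confident teacher over three classes
aligned = [3.8, 1.1, 0.3]   # student that already tracks the teacher
far_off = [0.1, 0.2, 0.1]   # student with near-uniform predictions

# The loss is near zero when the student matches the teacher,
# and larger the further the student's distribution drifts.
print(distillation_loss(aligned, teacher) < distillation_loss(far_off, teacher))
```

The soft targets are the key: unlike hard labels, they carry the teacher's full ranking over wrong answers, which is much of what makes small distilled models punch above their parameter count.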
The models — which range in size from one billion to 27 billion parameters — performed well on benchmarks, according to Google, achieving rough parity with some of Google’s flagship Gemini models. In preliminary evaluations on LMArena’s leaderboard, the 27 billion parameter iteration of Gemma 3 outranks o3-mini, DeepSeek’s V3 and Claude 3.7.
In a streak of transparency, the researchers estimated that training the models resulted in the emission of 1,497.13 tons of carbon dioxide, though it remains unclear how that translates to the carbon and electrical intensity associated with actually operating the model.
And though Google refers to Gemma 3 as an “open” model, it’s really open weights — source code and training data remain obscured, two necessary elements of securing a truly open-source badge.
The landscape: This is more evidence of the latest trend, in which reinforcement learning and model distillation are leveraged to make smaller models significantly more capable, opening up the abilities of large models at a far lower cost. It is also more evidence of a ceaseless benchmark leap-frogging race being run by developers.

Design and ship your dream site with Framer. Zero code, maximum speed.
Just publish it with Framer. Beautiful sites without code — easily.
Join thousands of designers and teams using Framer to turn ideas into high-performing websites, fast. The internet is your canvas.
Framer is simple to learn, and easy to master. Check out the Framer Fundamentals Course to build on your existing design skills to help you quickly go live in Framer. Perfect for designers transitioning from Figma or Sketch.


AI for Robotics: Google DeepMind on Wednesday launched two new AI models designed for real-world robotics, dubbed Gemini Robotics. DeepMind said that the models will “enable a variety of robots to perform a wider range of real-world tasks than ever before.”
Market update: After February’s consumer price index came in softer than expected, markets had a much-needed day of rebounds and green tickers, led by the tech sector (which most recently led a sell-off).

How BYD undercuts Tesla around the world, by the numbers (Rest of World).
Chinese companies rush to put DeepSeek in everything (Wired).
All this bad AI is wrecking a whole generation of gadgets (The Verge).
TikTok executives bail as ban deadline approaches (The Information).
Salesforce pledges to invest $1 billion in Singapore over five years in AI push (CNBC).
Interview: How new tech is making digital trust complicated

Source: Unsplash
In 2023, secure messaging app Signal introduced quantum-resistant encryption to its cryptographic protocol. The idea at the time was simple: if sufficiently stable and powerful quantum computers become genuinely efficacious — and worse, widespread — in the near future, the encryption algorithms that today secure the world will become vulnerable.
“As if this whole AI thing wasn't enough, there's this next massive meteor hurtling towards our digital planet, and that's quantum computing,” Dr. Amit Sinha, CEO of DigiCert, told me. “I'd call it a Y2K times 10 event without a date.”
The specific algorithm at risk here is RSA encryption, a widely used form of digital encryption that relies on the difficulty of factoring enormous numbers to stymie current computers. But the whole idea of quantum computers is that they would be able to quickly solve calculations that would take classical computers somewhere between decades and uncountable centuries, meaning that mathematical barriers could quickly become quantum-leapable chasms.
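To see why factoring is the linchpin, here is textbook RSA with absurdly small primes (real keys use primes thousands of bits long — this is the standard classroom example, not production cryptography):

```python
# Textbook RSA at toy scale, to show where the security lives.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120 -- computable only if you know p and q
e = 17                     # public encryption exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(recovered == message)

# An attacker who can factor n recovers p and q, then phi, then d --
# the entire private key. Factoring a 2048-bit n is infeasible for
# classical machines; Shor's algorithm on a large, stable quantum
# computer would not be, which is the "meltdown scenario" above.
```

Note that the public values (n, e) are all an attacker ever needs to see; the only thing standing between them and d is the hardness of splitting n into p and q.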
And since RSA is “keeping the internet together,” according to Sinha, “that’s kind of a meltdown scenario.”
And it’s one that companies are increasingly waking up to.
About a year ago, Apple, following Signal’s lead, introduced a quantum-safe security upgrade to its iMessage encryption protocol, saying that although such powerfully threatening quantum computers don’t exist today, “attackers can collect large amounts of today’s encrypted data and file it all away for future reference. Even though they can’t decrypt any of this data today, they can retain it until they acquire a quantum computer that can decrypt it in the future.”
Gartner said last year that companies should start the transition to quantum-safe encryption today, or better yet, several years ago, saying that “quantum computing will render traditional cryptography unsafe by 2029.”
Part of the problem here, according to Sinha, is that preparing for the impending quantum threat involves more than “just a quick software update … you need to think about in a holistic way, across your infrastructure, across your cryptographic inventory,” he said.
The other part of the problem is that companies today don’t really have a clear field to devote time and resources to future threat preparation when they’re facing active cybersecurity attacks of a different sort: generative AI.
DigiCert’s tech is leveraged by companies around the world as a means of verifying the authenticity of software, devices, machines and content. This includes everything from verifying to users that they are on the real Amazon.com, to ensuring that an electronically signed contract remains free of subtle forms of digital tampering. Before generative AI exploded, that was a vital challenge. Since the GenAI boom, it’s become so much harder.
“Now with AI, the biggest problem is what's real and what's fake,” Sinha said. “I think we will live in a world where the default assumption is all media out there is fake, and only media that has content provenance, and clear indicators of what's the origin of that media, will be trustworthy.”
Since so much media today is consumed in embedded forms, verifying levels of trustworthiness is even more difficult.
Fake social media accounts designed to look like, say, the New York Times, posting fake videos of politicians, for instance, could be convincing, especially to folks who aren’t trained to pay very careful attention to the details.
But the threat, according to Sinha, is far more widespread than the veracity of the media people consume.
“Think of businesses,” Sinha said. “How does an insurance company know that the pictures that they got about an accident haven’t been tampered with to claim more damages? Or you might be a juror sitting in a court of law wondering, is that crime scene footage real or has it been edited or manipulated?”
This makes initiatives like the Coalition for Content Provenance and Authenticity (C2PA), supported by Adobe, Microsoft and DigiCert, among many others, vital, according to Sinha.
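The core mechanic behind provenance is binding a cryptographic fingerprint of the media to its origin so that any later edit is detectable. A minimal sketch, using an HMAC as a stand-in for the public-key signatures and embedded manifests real provenance systems use (the key name and content are invented for illustration):

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # hypothetical key held by the publisher

def sign(content: bytes) -> str:
    """Publisher attaches this tag when the media is created:
    an HMAC over the SHA-256 digest of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Anyone holding the key can check the media is untouched.
    compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(content), tag)

original = b"crime-scene footage, frame data..."
tag = sign(original)

print(verify(original, tag))               # untampered media: True
print(verify(original + b"edited", tag))   # a one-byte edit: False
```

Even a single altered byte changes the digest completely, so the verification fails — which is exactly the property an insurance adjuster or a juror would want from tamper-evident media.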
Aside from the myriad individual, personal cybersecurity risks posed by generative AI, companies are dealing with an environment of decreasing digital trust and ever-increasing digital risk, an obstacle-laden course carrying us toward the “meteor” that is functional quantum computers.
It is a world that requires more than just prudent, technological upgrades through quantum-safe encryption protocols and widespread, tamper-resistant content provenance initiatives. It is a world that, as many cybersecurity researchers have told me in the past, requires people to adopt a mindset of zero trust when it comes to the consumption of digital information, from fraudulent texts demanding a stack of Amazon gift cards to viral videos of world leaders fist-fighting.
“It's about education, it's about making sure that organizations are prioritizing this,” Sinha said. “This isn't day-to-day firefighting. It needs to go up the food chain, and (companies) need to prioritize this.”


Which image is real?



🤔 Your thought process:
Selected Image 2 (Left):
“The engine in image #1 looks wrong somehow.”
Selected Image 1 (Right):
“Image 2 looked too glossy; I got fooled!”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on AI and coding:
30% of you think AI will augment coders and software engineers.
22% think that Dario is overselling it and 22% think they will be replaced for sure.
Only 6% of you said that you don’t use GenAI for coding help.
Augmentation:
“Augmentation is certainly the next step. If you currently have skills, I would be less worried if you adapt. If you are newly getting into the field of programming based upon the growth of the last 20 years, I would tell you that phase is over.”
Augmentation:
“It will never replace, but help enhance our ability to grow quickly.”
Do you still trust digital media by default?
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Boxabl Disclosure: *This is a paid advertisement for Boxabl’s Regulation A offering. Please read the offering circular here. This is a message from Boxabl