⚙️ Exclusive interview: How to bake ethics into AI

Good morning. Donald Trump is now heading back to the White House.

The impact of a Trump administration on artificial intelligence remains somewhat murky at this stage, although less regulation seems likely, something Big Tech is certainly happy about.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

  • 🔭  AI for Good: A timelapse of the universe

  • 🏛️ Big Tech, AI and President Trump 

  • 💻 Deep learning is hitting a wall

  • 🌎 Interview: A responsible AI ecosystem

AI for Good: A timelapse of the universe

Source: Vera C. Rubin Observatory

As we often write, science begins with observation. The better the observation, the better the scientific progress.

The Vera C. Rubin Observatory, a new telescope under construction in Chile, has undertaken a massive observational goal: over a 10-year period, the telescope will repeatedly scan the night sky to develop a high-definition time-lapse of our universe. This process will generate about 60 million gigabytes of raw image data.

The installation will employ a number of advanced computing technologies, including artificial intelligence, in a few different ways. 

  • The most obvious use here is simple data processing. The observatory will employ software to process the data coming off the telescope; part of this process is a nightly comparison against older images to determine what has changed out in space. The software will automatically flag each of these changes, millions of them (see the sketch after this list).

  • Additionally, the telescope must meet a high bar of image quality; to achieve this, it will employ an Active Optics System — powered by a deep learning model — to correct for alignment errors. 
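
To make the first of those concrete, here is a minimal sketch of the kind of nightly difference flagging described above. Rubin’s actual pipeline is far more sophisticated (PSF matching, artifact rejection, alert distribution), so the function name, threshold and noise model below are illustrative assumptions rather than the observatory’s software.

```python
import numpy as np

def flag_changes(tonight: np.ndarray, reference: np.ndarray,
                 threshold: float = 5.0) -> np.ndarray:
    """Return (row, col) pixel coordinates where tonight's exposure
    differs significantly from the reference image."""
    diff = tonight.astype(np.float64) - reference.astype(np.float64)
    # Crude noise estimate: a robust standard deviation of the difference
    # image via the median absolute deviation. Real pipelines use
    # per-pixel variance maps instead of one global figure.
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return np.argwhere(np.abs(diff) > threshold * sigma)
```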

Why it matters: Rubin’s observations, partially powered by AI, will unlock a whole new wave of scientific understanding, capturing the universe “in more detail than we’ve ever seen,” according to the observatory.

Turn Speech into Seamless Text

Tired of slow typing and endless edits? Wispr Flow lets you speak naturally and converts your thoughts into perfectly formatted text, saving you hours.

Whether you’re crafting AI prompts in ChatGPT, Cursor, or v0, or simply writing emails and messages, Flow adapts to your style and context, ensuring every word is seamless.

For professionals, students, and tech enthusiasts, Flow is a game-changer. Developers love using Flow to interact with AI assistants faster than typing. Product managers appreciate how it turns rambles into clear ideas. And for anyone juggling busy schedules, Flow’s accuracy and speed give you more time for what matters.

Flow’s advanced voice recognition captures your tone, eliminates mistakes, and even offers features like auto-edits and command mode for enhanced productivity.

Big Tech, AI and President Trump 

Source: Unsplash

Early Wednesday morning, the U.S. presidential race was called for former President Donald Trump. 

The major stock indices leaped throughout the day, with Tesla notably rising nearly 15%. Trump was congratulated on his victory by a slew of Big Tech executives, including OpenAI CEO Sam Altman, Apple CEO Tim Cook, Amazon CEO Andy Jassy, Meta CEO Mark Zuckerberg, Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and Amazon founder Jeff Bezos.

Wedbush analyst Dan Ives — who said the day before that Trump’s proposed tariffs and stance on China could negatively impact the AI industry — said in a note Wednesday that these two elements will need to play out over the coming year for their full impact to be realized. 

  • Ives said he expects a strong focus on AI and Big Tech “out of the gates” from Trump’s forthcoming administration: “We would expect significant AI initiatives from the Beltway within the U.S. that would be a benefit for Microsoft, Amazon, Google and other tech players.”

  • Ives called Tesla and its CEO, Elon Musk, the big winners here, even as he said a Trump presidency will be an overall negative for the electric vehicle industry.

The key, according to Ives, is regulation. Musk, a massive funder and supporter of Trump, has spent months railing against regulatory efforts across his lineup of companies and industries, including self-driving and artificial intelligence. Trump, meanwhile, has promised to repeal President Joe Biden’s executive order on AI; beyond that, he has said little about AI, though a light-touch regulatory approach is expected.

Right now, there are a lot of unknowns. I’ll be exploring this new regulatory dynamic in greater depth next week.

Transform Testing with AI: Meet Tricentis Copilots

Supercharge your QA testing productivity with Tricentis Copilots: generative AI-powered, intelligent assistants that help QA and development teams test applications, processes, and data more efficiently and effectively.

  • A Trump Win Could Unleash Dangerous AI (Wired).

  • Tesla jumps 13% as Trump-backer Musk seen benefiting from White House win (CNBC).

  • What Trump’s victory means for the world (Semafor).

  • Did OpenAI just spend more than $10 million on a URL? (The Verge).

  • Malaysia’s new data centers create thousands of jobs — and worries about power and water shortages (Rest of World).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

  • Pythia: An AI tool designed to assist founders.

  • Robofy AI: AI Chatbot builder for your website.

Deep learning is hitting a wall

Source: a16z

Cognitive scientist Gary Marcus has been saying for years that deep learning — a subset of machine learning that uses layers of neural networks to simulate artificial cognition — is hitting a wall. 

Scaling compute and training data, according to Marcus, does not represent a magical pathway to achieving more powerful systems. Indeed, he has said that systems have been reaching a plateau for a while now.

  • A significant aspect of this plateau is that hallucinations remain baked into the architecture, regardless of the scale of compute and training data and regardless of mitigation efforts.

  • There is no evidence that they will be going away anytime soon. And no one really knows how to achieve an AI that does not hallucinate.

It’s a perspective that venture capitalist Marc Andreessen seems to be coming around to. 

Appearing on an a16z podcast episode, Andreessen said that two years ago, OpenAI’s LLMs were far ahead of everyone else’s technology. But today, the LLMs from all the major labs are pretty much at the same level.

“They’re kind of hitting the same ceiling on capabilities,” he said. “There’s lots of smart people in the industry working to break through those ceilings. But sitting here today, if you just looked at the data, over time what you would say is there’s at least a local topping out of capabilities that’s happening.”

Andreessen has previously railed against the idea of regulating the industry and has claimed that AI will usher in a utopia here on Earth. He has invested billions of dollars into the ecosystem. 

Exclusive interview: How to bake ethics into AI

Source: Sama

The AI ecosystem is far wider and more complex than the chatbot interfaces end-users engage with. 

The process begins with data collection and curation at an internet-wide scale. Once the data is collected, it has to be cleaned and processed by human data annotators. This is a vital step in training or fine-tuning a model for a few reasons, one being that it helps reduce the quantity of harmful data absorbed during those mass web scrapes.

But the process also results in better model performance. 
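
As a toy illustration of how annotator labels feed back into training data, here is a sketch of a consensus filter that drops scraped examples flagged as harmful. The label scheme and agreement rule are assumptions made for illustration, not Sama’s actual process.

```python
from collections import Counter

def keep_safe_examples(examples: dict[str, str],
                       labels: dict[str, list[str]],
                       min_safe_votes: int = 2) -> dict[str, str]:
    """Keep only scraped examples that annotators agree are safe.

    `examples` maps example id -> raw text; `labels` maps example id
    -> the labels annotators assigned, e.g. ["safe", "harmful", "safe"].
    """
    kept = {}
    for ex_id, text in examples.items():
        votes = Counter(labels.get(ex_id, []))
        # Require agreement that the example is safe, and zero harmful flags.
        if votes["safe"] >= min_safe_votes and votes["harmful"] == 0:
            kept[ex_id] = text
    return kept
```

Filters like this are one reason annotation quality shows up directly in model quality: every mislabeled example either leaks harmful content into training or throws away good data.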

The problem is that this work is often outsourced to low-paid workers in Africa, where conditions are known to be abusive. A June letter from a group of Kenyan data annotators working for OpenAI and Meta said that the work, which can involve watching abusive and violent content for less than $2 per hour, is “mentally and emotionally draining.” 

“U.S. Big Tech companies are systemically abusing and exploiting African workers,” the letter reads. 

The ethical impacts of generative AI are pervasive throughout the ecosystem, present and pressing even before a system is made accessible to the world.

Sama, a data annotation company founded in 2008, is working to inject responsibility into the pipeline, beginning with the annotation process. 

  • A certified B Corp, the company has two overriding goals: one, to advance artificial intelligence technology through ethical, accurate data pipelines, and two, to eliminate poverty. 

  • According to the company, it has so far lifted more than 65,000 people out of poverty through employment in data annotation.

Its focus on helping bring about responsible AI has everything to do with the pipeline: making sure that a given model’s data is auditable, traceable, and ethically sourced and curated.

“We've really taken that security and responsible AI element, it's really core to our DNA,” Duncan Curtis, SVP of AI product and technology at Sama, told me. 

What this more thoughtful approach translates to, on the ground, is an attempt to avoid what Curtis called the race to the bottom. Many of the practices in the data annotation process that are abusive and have caused harm are “encouraged by the race to the bottom. So the cheapest, the fastest, whatever it takes to get there. And I get it, it's a technological arms race.”

But he said that at this point, with most of Big Tech’s models having leveled out at roughly similar capabilities, there is an increasing focus on, and understanding of, the idea that doing things the right way (slowly, thoughtfully, ethically) is better, not just morally, but also for the end product.

  • “That's how we beat the race to the bottom, is that you get better quality data,” Curtis said. 

  • “It ends up being faster because you get the right data for you. We found, and our clients have found, that by working with us and doing the right thing, they also end up benefiting with better models and faster time to market for their models as well.” 

There is, however, a complexity here; Sama is on a mission to do social good by helping developers build generative AI models, which in some cases may be used to replace workers, something that could go against its anti-poverty mission. But so far, with the generative tools Sama has introduced into its own workflow, that scenario isn’t panning out. 

Curtis said that, because of the rapid growth of the industry, each worker can now do more work, but the “number of people we need is continuing to grow. So we're not shuttering our workforce in order to be like, ‘Oh, we put out a new AI thing. Let's fire 30% of the people.’ We continue to outpace that growth.”

He added that the “complexity of work continues to level up, and our ability to train our people has kept getting better and better so that we can meet this need at a more global scale.” 

Last year, the firm came under fire for taking a Facebook moderation job that exposed its employees to highly distressing images. CEO Wendy Gonzalez told the BBC she regretted taking the job, and that going forward, Sama would not involve itself with the moderation of harmful content or the training of AI models that support police surveillance or weapons of mass destruction.

"Africa needs a seat at the table when it comes to the development of AI,” Gonzalez said. “We don't want to continue to reinforce biases. We need to have people from all places in the world who are helping build this global technology."

Which image is real?

🤔 Your thought process:

Selected Image 1 (Left):

  • “Steam was unnatural on the second image.”

Selected Image 2 (Right):

  • “Hmm, interesting! I got it wrong, should've gone with my initial gut feeling. I did originally think image 1 was real (details on green veggies and upside down spoon ladle seemed legit!) but then I saw the level of liquid in the bowl was different on the left vs the right side of the bowl interior, and presumed that was an AI error. Oops!”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI in the law:

27% of you think it’s no good, 22% think it could be great but only in highly specific applications, and 17% think it’s great for everything. The rest aren’t really sure.

Specific applications:

  • “Defend yourself using AI in court, but still need a judge and jury.”

  • “Standard work like contracts, wills, divorces, etc. should be done by AI. Litigation, lawsuits, etc. should be left to the real lawyers - but at 50% of what they charge today!”

How important is an ethical AI pipeline to you?
