⚙️ The new hardware that will transform AI

Good morning. LinkedIn, it turns out, has been training its AI on its users’ data — according to 404 Media, the platform plans to inform users about the practice, and its automatic opt-in to that practice, in a future update to its terms of service …

To turn it off, go to Settings, then Data Privacy.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

Source: The Deep View

A few weeks ago, I finally got the chance to (virtually) meet AI researcher and cognitive scientist Dr. Gary Marcus. We spoke for nearly an hour about his new book, “Taming Silicon Valley,” and about the technical and philosophical complexities of AI. 

I’ve had the wonderful pleasure and distinct privilege of speaking with hundreds of experts, professors, engineers and executives about artificial intelligence; but to condense these conversations into articles or newsletter posts necessarily means that a lot of information gets lost.

I’m really excited to say that, in this instance, nothing was cut — we recorded the full conversation for you to consume at your leisure. Check it out here (on YouTube) or here (for podcasts).

We plan on doing a lot more of this, bringing exclusive conversations right to your door, so you can hear directly from the experts. 

A special edition based on our conversation will be coming to you this weekend.

New AI agents are emerging daily, but how many actually work? Numbers Station releases an exclusive look at how they deliver analytics agents to their customers. Their agents work in unison so users can query across different data sources.

The fleet of agents:

  • The Search Agent – unifies fragmented analytics assets (dashboards, schemas, warehouses, email) to find the one that answers the question.

  • The Diagnostic Agent – goes beyond the capabilities of existing dashboards and initiates a dynamic drill down to find the root cause of a trend.

  • The Next Best Action Agent – recommends an action for the user to take based on its findings.

  • Finally, a Tool Agent – implements the suggested action by plugging into external tools (e.g. a GDrive agent to create and share a presentation with other teams), so you don’t have to.

Skip the waitlist and request a demo to see all the agents in action.

California governor signs some AI legislation 

Source: Governor Gavin Newsom

California Governor Gavin Newsom this week signed five AI-related bills into law. But he has yet to make a decision about SB 1047.

The details: The governor signed three bills aimed at fighting deepfake election content, requiring disclosures in advertisements and forcing social media platforms to disallow or label synthetically generated election-related content posted in the run-up to an election.

  • “It’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation,” Newsom said. 

  • Newsom signed another two bills that would protect the voices and likenesses of performers and actors from synthetic replication without their permission. 

SB 1047: But while appearing at a Salesforce conference on Tuesday, Newsom signaled his reluctance to sign SB 1047, which has been heavily criticized (and lobbied against) by the major players in Big Tech and venture capital.

He said the bill could have a “chilling effect” on the open-source AI community, though he added that he “can’t solve for everything,” indicating his decision is still in flux. He has until the end of the month to sign or veto the bill.

AI will run 80% of project management by 2030, per Gartner

But you don’t have to wait till then. Backed by Zoom, Atlassian, and Y Combinator, Spinach AI makes that a reality today. It can:

  • Help run your meeting on Google Meet, Zoom, or Microsoft Teams

  • Summarize and share notes and action items in email, Slack, Google Docs, Confluence, or Notion

  • Help you update tasks and tickets in Jira, Asana, Trello, ClickUp, Monday or Linear

Which means you can spend more time building and growing the business.

  • Microsoft, BlackRock, MGX launch $100 billion AI fund (Global Infrastructure Partners).

  • LinkedIn is already training its AI on your data. It’ll update its terms of service later (404 Media).

  • Fed slashes interest rates by a half point, an aggressive start to its first easing campaign in four years (CNBC).

  • Google offered to sell part of ad tech business, not enough for EU publishers, sources say (Reuters).

  • Lionsgate signs deal to train AI model on its movies and shows (The Verge).

If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

‘Don’t trust Big Tech’: Government holds another hearing on AI

Source: Senate Judiciary Committee

As federal regulatory efforts in the U.S. remain cautious to the point of nonexistence, the Senate Judiciary Committee on Tuesday held another hearing on AI oversight. You can watch the hearing and read the testimony here.

The hearing focused on the “insider” perspective, featuring testimony from former OpenAI board member Helen Toner, Hugging Face’s chief ethics scientist Margaret Mitchell, former OpenAI engineer-turned-whistleblower William Saunders and scholar David Evan Harris. 

The details: Sen. Richard Blumenthal (D-Conn.) said in his opening statement that the thing “we should learn from social media: don’t trust Big Tech … For years, they said ‘trust us.’ We learned we can’t, and still protect our children and others.”

  • Referencing Big Tech’s assertion that it wants regulation, just “‘not that regulation,’” Blumenthal said that some form of regulation is vital. It’s an effort, he said, that has been slowed down in the face of Big Tech’s “armies of lobbyists and lawyers.” 

  • Blumenthal said the goal is to achieve the “promise” of AI through “light touch” regulation: “We won’t all agree on what a light touch is, but if we are honest with each other, I think we can develop some standards — as well as enforcement mechanisms — that make sure that we impose accountability, as well as criteria for judging whether or not a particular form of AI is safe or effective, just as we would impose that standard on new drug development.”

Both Toner and Mitchell said that regulation does not have to “be in tension” with innovation, something many researchers have told me. Mitchell said that regulation can “be very useful.” 

Broad-based federal legislative efforts continue to fight an uphill battle, despite the public’s overwhelming support for such measures. It is not clear when, or how, the U.S. will regulate AI. Efforts at the state level remain inconsistent. 

“I resigned from OpenAI because I lost faith that by themselves they will make responsible decisions about AGI,” Saunders said in his statement. “If any organization builds technology that imposes significant risks on everyone, the public and the scientific community must be involved in deciding how to avoid or minimize those risks.”

Exclusive Interview: The new hardware that will transform AI 

Source: Unsplash

In an environment where data breaches are becoming more and more frequent, concerns about cybersecurity have been steadily ramping up; a recent study found that 82% of Americans are concerned about the overall security of their personal data online. 

This has only been compounded by the introduction and integration of generative artificial intelligence, which one cybersecurity expert told me is the “most vulnerable technology ever to be deployed in production systems.” 

  • When it comes to the question of genAI’s role in the enterprise, those concerns about data security are even more potent, acting, for some, as an obstacle to adoption that has yet to be overcome.

  • A report by cybersecurity company HiddenLayer found that nearly 80% of companies have reported breaches of their AI models in the past year; the rest couldn’t be sure whether their models had been attacked.

In very basic terms, one of the core data security issues here is that even if data is encrypted at rest (as most is), it needs to be decrypted when it’s called into use. That moment of decryption opens up vulnerabilities that attackers have exploited.

In answer to these vulnerabilities, there is a security technology called Fully Homomorphic Encryption (FHE), which allows computation on encrypted data without ever needing to decrypt it.
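
To make that concrete, here’s a minimal sketch of the idea using TenSEAL, an open-source homomorphic encryption library. This is purely illustrative: TenSEAL is not Niobium’s stack, the numbers are made up, and the parameters are standard tutorial values rather than a production configuration.

```python
# pip install tenseal
import tenseal as ts

# Set up a CKKS context (an FHE scheme for approximate arithmetic on reals).
# These are common tutorial parameters, not a vetted production setup.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40

# The data owner encrypts locally; the plaintext never has to leave their machine.
salaries = ts.ckks_vector(context, [52_000.0, 61_000.0, 75_000.0])
raises = ts.ckks_vector(context, [1_000.0, 2_000.0, 3_000.0])

# An untrusted server could run these operations on the ciphertexts alone,
# without ever seeing the underlying numbers.
new_salaries = salaries + raises   # encrypted element-wise addition
doubled = salaries * 2             # encrypted scalar multiplication

# Only the secret-key holder can decrypt the (approximate) results.
print(new_salaries.decrypt())  # ~[53000.0, 63000.0, 78000.0]
print(doubled.decrypt())       # ~[104000.0, 122000.0, 150000.0]
```

CKKS arithmetic is approximate by design, which is why the decrypted results are only near-exact, and each ciphertext operation costs orders of magnitude more than its plaintext equivalent, which is exactly the problem Niobium is attacking.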

Niobium is one of a few startups working on accelerating a commercial iteration of FHE, which they believe will usher in an era of confidential computing. 

The problem: While FHE is a promising technology, as it stands today, it is practically useless, Niobium’s Chief Product Officer, Jorge Myszne, told me. 

  • “The drawback of FHE is that it’s very slow, between 100,000 and a million times slower than” standard compute, he said, which makes it “almost unusable.” Since its founding in 2021, the company has been developing a dedicated silicon chip that would accelerate FHE, making it functionally usable.

  • A regular CPU chip is not well-suited for running FHE — similar to how regular CPUs aren’t ideal for running genAI. Niobium’s chip, however, is specifically designed to handle FHE. 

The impact on AI: One of the major use cases Niobium is exploring involves generative AI, specifically enabling “private AI.”

“So think about ChatGPT where any query that you do doesn't end up on the internet, leaking data everywhere, or the model doesn’t learn from whatever you are uploading,” Myszne said. 

The GPUs that power AI consume far more energy than their smaller CPU counterparts. While it’s too soon to share specifics, Myszne said he knows “what the numbers are for our chip. I would say it’s way below a GPU,” though he added that it’s not necessarily a fair comparison.

Looking ahead: The first version of Niobium’s chip is just finishing the fabrication process; once the company completes tests, it will grant access to a handful of early adopters that are keen to test the solution. 

  • Myszne expects this first iteration to be around 100 times slower than standard compute, which would make some applications accessible. He added that “in the next five years, we will get to parity. I'm not saying we'll get exactly the same performance because there are no free meals. You want encryption, you will pay for that.”

  • The goal, instead, is to make the gap small enough that there’s little compelling reason not to use FHE. 

Earlier this year, the company secured $5.5 million in venture funding.

Which image is real?


🤔 Your thought process:

Selected Image 2 (Left):

  • “The image 1 was too perfect. The house did not make sense in terms of building standards.”

Selected Image 1 (Right):

  • “Fooled by the lack of windows in Image 2. I figured you’d always want a window next to a cabin door. I guess not! 🤷‍♂️”

💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on OpenAI’s new safety board:

A third of you said voluntary commitments aren’t enough, and another third said you didn’t trust OpenAI before and don’t trust them now.

Around 15% said they feel safer already, and another 15% said the move wasn’t necessary to begin with.

I feel safer:

  • “SafER, yes, but (the) proof is in the action. Let’s see what they actually come up with.”

Do you care enough about privacy + private computing to explore FHE?
