⚙️ Interview: A different way to engage with theoretical science
Good morning, and happy Friday.
It’s been a long week. Hope you all have a lovely weekend.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌌 AI for Good: The foundations of our galaxy
💰 AI sector continues to lead global venture funding
🚨 Anthropic partners with Palantir to bring Claude to US Defense Ops
🔬 Interview: A different way to engage with theoretical science
AI for Good: The foundations of our galaxy
Source: Unsplash
A core focus of astronomical research involves developing an understanding of the origins of our galaxy.
Astronomers believe that the Milky Way as we know it today was preceded by something called the proto-galaxy. But to understand how the chaos of the proto-galaxy evolved into the Milky Way, scientists would need to track down the original stars that were around back then.
It’s a task they’ve been working on for around 25 years, to no avail.
But in 2022, a team of scientists took a new dataset from the European Space Agency’s Gaia telescope and ran the data through specially designed machine learning algorithms.
Trained to estimate the metallic content of stars, a proxy for their age (the universe's earliest stars formed before heavy elements were abundant, so the oldest stars are metal-poor), the algorithm identified a set of 18,000 ancient, metal-poor stars.
The researchers called the results “preliminary,” though they did succeed in fleshing out a clearer picture of the proto-Galactic stars that still line our galaxy.
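For the technically curious, here is a minimal, hypothetical sketch of what that kind of selection pipeline can look like: a regressor learns to predict stellar metallicity from photometric features on a labeled sample, then scores a much larger catalog so the most metal-poor candidates can be pulled out. The synthetic data, the features, the model choice and the threshold below are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: train a regressor to predict a star's metallicity
# [Fe/H] from photometric features, then keep the most metal-poor
# (i.e., oldest) candidates. All data here is synthetic, and the -1.5
# cut is chosen purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in for a labeled training set: two photometric features
# (think Gaia BP-RP color and G magnitude) with known [Fe/H] labels.
X_train = rng.normal(size=(5_000, 2))
feh_train = 0.8 * X_train[:, 0] - 0.3 * X_train[:, 1] + rng.normal(scale=0.2, size=5_000)

model = GradientBoostingRegressor().fit(X_train, feh_train)

# Score a large unlabeled catalog and select metal-poor candidates.
X_catalog = rng.normal(size=(100_000, 2))
feh_pred = model.predict(X_catalog)
metal_poor = X_catalog[feh_pred < -1.5]  # illustrative cut
print(f"{len(metal_poor)} candidate ancient, metal-poor stars")
```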
Industry Leaders Discuss How AI is Transforming Business ROI
On November 14, join 15k+ professionals and leaders from top AI-first companies like Moderna and S&P Global to learn how they’re driving AI ROI. Register for free here.
Here’s a peek at the speaker lineup:
Brice Challamel, Moderna’s VP of AI & Product Platforms, will share insights about Moderna’s 80% internal adoption of AI and the gains they’ve seen
Section CEO Greg Shove will unveil never-before-released research about the state of AI proficiency in the workforce
General Catalyst Managing Director Marc Bhargava will discuss his approach to AI-driven investments
Whether you're already implementing AI or just curious about what’s working in the field, this is the place to be on November 14. You can join for one session or stay all day.
AI sector continues to lead global venture funding
Source: Crunchbase
Since 2022, artificial intelligence has been a core focus for venture capitalists around the world, a theme that has yet to change.
In October, according to Crunchbase data, global venture funding hit $32 billion, making it the biggest month for venture funding this year.
The details: As it has been for months now, the surge was driven by ongoing interest in artificial intelligence.
The AI sector attracted $12.2 billion in funding in October, nearly 40% of all venture funding for the month.
It caps off a series of slow months for the sector in quite a big way; after pulling in $11.2 billion in funding in July, the sector drew $4.4 billion in August and $4.2 billion in September.
The surge was driven in part by OpenAI’s massive $6.6 billion funding round.
Pipedrive is one of the best CRMs out there. Plus, their AI-powered sales assistant sharpens sales teams, significantly boosting performance.
Police freak out at iPhones mysteriously rebooting themselves, locking cops out (404 Media).
Jeff Bezos says he’s a climate guy — why is he kissing the ring? (The Verge).
Powell says he will not resign if Trump asks as Federal Reserve cuts interest rates (Semafor).
Two OpenAI business partners each discuss $2 billion valuation (The Information).
What Trump’s win could mean for student loan forgiveness (CNBC).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Top-Quality Talent, 70% Cost Savings—Meet Athyna
Looking to hire top-tier talent without breaking the bank? Athyna combines high quality with cost-effectiveness, offering the best in LATAM tech talent in a fast, reliable, and AI-driven way.
Save yourself the hassle and start hiring smarter:
Access vetted, top-tier talent from LATAM in just 5 days
No upfront costs until you find the perfect match
Save up to 70% on salaries
Hire with confidence. Scale your team with Athyna today!
Anthropic partners with Palantir to bring Claude to US Defense Ops
Source: Unsplash
AI startup Anthropic on Thursday unveiled a new partnership with data analytics firm Palantir and Amazon Web Services (AWS) to grant U.S. intelligence and defense agencies access to Anthropic’s Claude 3 and 3.5 families of generative AI models.
The details: The partnership integrates Anthropic’s Claude models into Palantir’s suite of technologies.
According to a statement, government officials will use the models to help assess “vast amounts of complex data rapidly, elevating data-driven insights, identifying patterns and trends more effectively, streamlining document review and preparation and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities.”
Palantir’s CTO, Shyam Sankar, said in a statement that the partnership provides “U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions.”
The context (and the risk): A lot is left unclear about this partnership: how Anthropic’s tech will be used by the government, how it will be secured, how it will be monitored and how certain inherent AI flaws, namely hallucinations and biases, will be addressed or mitigated.
Those two flaws make the deployment of generative AI in highly critical situations (governmental and military use are top of mind here) enormously challenging, and potentially damaging.
Reducing the human element of high-stakes decision-making processes in everything from managing released convicts to processing judicial data to the details of making war is, quite simply, a highly risky endeavor, full of ethical quandaries and unpredictable problems.
Palantir recently reported earnings of 10 cents per share on revenue of $726 million, “driven by unrelenting AI demand.” After surging some 20% following the report, which featured strong full-year guidance, the stock hit a new 52-week high on Thursday.
Anthropic’s mission is to “help promote safe and reliable AI.”
Interview: A different way to engage with theoretical science
Source: Created with AI by The Deep View
True artificial intelligence is fundamentally a theoretical technology. We have language models of varying sizes, we have generative AI and we have algorithmic automation, but we have not yet achieved the sort of genuine capabilities that would live up to the term “AI.”
What I’m talking about might sound like artificial general intelligence (AGI), and indeed, maybe it is. We don’t have any agreed-upon definition of what AGI might be, other than that it would be vaguely as intelligent as a human. And while scientists debate whether AGI might ever be possible, many of the major labs are trying very hard to build it.
OpenAI’s mission, for example, is to “ensure that artificial general intelligence benefits all of humanity.” Then there’s Safe Superintelligence Inc., started by OpenAI co-founder Ilya Sutskever, whose mission is to develop “one product: a safe superintelligence.”
Google DeepMind is working on it, as is Meta; Anthropic is working on “transformative AI” (again, definitions are loose here).
All told, billions of dollars are going into an unsupervised effort to make this hypothetical technology real. And American voters, according to the Artificial Intelligence Policy Institute (AIPI), overwhelmingly want the tech industry’s efforts here to be restricted by government oversight.
The thing about a hypothetical technology, meanwhile, is that it can feed a variety of narratives. Some say powerful AI, or AGI, or artificial superintelligence, could cause the destruction of the human race; some say it will bring about a utopia.
For Elizabeth Cox, the answer is complicated but the question is clear: should we develop AGI in the first place?
Cox, who created the Demon of Reason series for TED-Ed, recently founded an indie production company called Should We Studio. The studio’s first project — an upcoming five-episode series of animated shorts that grapple with the societal implications of new technologies — is called Ada. Each episode deals with two possibilities: Ada’s reality and the future that she envisions.
In her work leading up to the launch of the studio, Cox became “interested in the ‘should we’ question, not just ‘can we make this a reality, but is this a good idea?’ And especially not answering that with a ‘yes’ or ‘no,’ but exploring all the possible implications.”
This, she told me, is important, since reducing technology to pros and cons misses the point: “I think a lot of times, the pros are the cons. What makes the technology really promising is often also what makes it scary.”
Her premise for her superintelligent AI episode was a “boring apocalypse … where everyone has good intentions, the AI is being used for mundane purposes and it still goes completely off the rails.”
She wasn’t going for an optimistic or pessimistic slant with Ada; she just wanted to engage with the questions at hand and explore the philosophy of societal implications through a potential, fictional outcome.
When it comes to the “should we” of superintelligence, Cox said that her answer is “maybe.”
She said that it remains “abstract” how an AGI could — positively or negatively — change the world. “I don’t know if we should do that … my emotions say no. My thoughts say maybe.”
There’s a philosophy that’s inherent to the development and deployment of AI.
What could the societal impacts be of AI therapists and AI teachers, artificial environments and real, yet imaginary, people? On one level, there are the implications of current technology; the impact of a whole generation of students overrelying on an AI tutor that hallucinates incorrect information, for instance, seems clear.
But what if we could trust it? What does an artificial society look like? Do we want that? Could an artificial society be good?
It’s a philosophical question that ought to be explored, at the very least, and hopefully before developers deploy more powerful systems (if they are ever achieved).
Which image is real?
🤔 Your thought process:
Selected Image 1 (Left):
“Wine in glass in unreal pic was not level to the table in which the glass was sitting...”
Selected Image 2 (Right):
“In the first image, there are multiple plates being held in front of people in a restaurant and one is held over another. Logistically it seems unlikely. there are also weird angles on the woman's shoulder and the thumb seems truncated in the server's hand.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on the importance of an ethical AI pipeline:
More than half of you think ethical AI pipelines are very important; 22% said they’re not that important and 17% said they’re not important at all.
Very:
“Without an ethical foundation to AI, any wrong data would have the tendency to pollute the entire pool of data used for answers.”
Very:
“Sad to see that only half are concerned about the ethical application of AI. Developers have to recognize that they themselves may someday be at the wrong end of a lack of morality.”
What do you think about Anthropic's partnership with Palantir?