⚙️ Exclusive Interview: Founder of Intuition Robotics talks AI companionship
Good morning. I spoke with the CEO and co-founder of Intuition Robotics about the philosophy of AI companionship.
It’s a fraught topic and a fascinating conversation.
I hope you enjoy it.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
- AI for Good: Supercharging the shift to a circular economy
- Apple, Nvidia and others used data from 173,536 YouTube videos to train AI models
- Exclusive Interview: Founder of Intuition Robotics talks AI companionship
AI for Good: Supercharging the shift to a circular economy
Source: UNEP
We live in what has been referred to as a linear economy. In other words, we buy stuff, then we throw it out to buy more stuff. And so on, and so on.
The problem with that is both our space and our resources are finite; more than 100 billion tons of resources enter the global economy each year, according to the World Economic Forum. Under 10% of that gets recycled and reused. Meanwhile, e-waste numbers keep rising; in 2022, we produced 62 million tons of e-waste, an 82% increase from 2010 (the business of AI is actively contributing to the rise in e-waste).
“We would need 1.5 Earths to sustainably support our current resource use,” according to WEF.
This brings me to the idea of a circular economy, which, as the name suggests, involves more focused reuse efforts and is intent on more sustainable, long-term consumption. Such a system, according to WEF, could be fantastic for economies as well as the health of people and the planet.
According to a 2023 report from the United Nations Environment Programme (UNEP), the Metabolic Institute and One Planet Network, AI can help get us there.
The details: The report suggests three overall stages — rethinking product design and manufacturing, extending the lifespan of products and effectively reusing materials.
AI, the report said, can be used to help with each of these stages by analyzing data and highlighting the kinds of trends and patterns that can enhance efficiency.
For example, “in a circular economy, AI could drastically minimize the unnecessary plastic present in 30% of our packaging” and dramatically reduce food waste.
Why it matters: What we’re talking about is the rapid analysis of several enormous data sources at once, tied together with trend prediction. As the report puts it: “Artificial Intelligence can gather, analyze and interpret complex environmental data and information to understand issues during the design and manufacturing process and prioritize action, such as refusing the use of certain products and materials.”
Build AI Solutions Without Coding
Did you know there's a simpler way to use GenAI? You can now create AI solutions without writing a single line of code—StackAI is a no-code platform designed by MIT PhDs for operations and enterprise teams. With StackAI, you can effortlessly build AI applications for:
✅ Drafting financial reports
✅ Redlining contracts
✅ Responding to RFPs
✅ Training new employees
And many more use cases!
StackAI allows you to:
✔️ Build end-to-end, production-ready AI assistants using a drag-and-drop interface
✔️ Ensure compliance with SOC 2 Type II, HIPAA and GDPR, with on-premise deployment available
✔️ Save up to 80% on your AI journey compared to using an in-house AI team
✔️ Shorten time-to-market for your AI journey (weeks instead of months)
Transforming your business with AI shouldn't be difficult—StackAI handles the heavy lifting for you.
Beeble AI, a VFX startup, announced $4.75 million in funding.
Aidentified, an AI prospecting solution for financial advisors, raised $12.5 million in Series B funding.
Anthropic launches $100 million fund with Menlo Ventures (CNBC).
Bhutan’s first AI startup is seven college kids in a dorm (Rest of World).
Google and Microsoft are helping Chinese firms skirt the ban on Nvidia chips (The Information).
The ‘godmother of AI’ has a new startup already worth $1 billion (The Verge).
Regrow hair in as few as 3-6 months with Hims’ range of treatments. Restore your hairline with Hims today.*
Apple, Nvidia and others used data from 173,536 YouTube videos to train AI models
Source: Unsplash
As the issue of copyright infringement pertaining to the training of AI models continues to play out in the courts, the industry has largely worked to maintain a smokescreen around the specifics of the data at play.
Already, there are numerous copyright infringement lawsuits from digital artists, photographers, media companies, newspapers and novelists, all alleging roughly the same thing: that scraping content without the original creator’s permission, knowledge or compensation in order to build a monetized product is morally wrong and a clear violation of copyright law.
A recent Proof News investigation found another creative frontier that has been scraped and leveraged (for free) to train AI models: YouTube.
The details: The investigation found that video transcripts from 173,536 YouTube videos (across 48,000 channels) were scraped by dataset creator EleutherAI without the permission, compensation or knowledge of the channel owners (and possibly in violation of YouTube’s terms of service).
This dataset, called YouTube Subtitles, was then made a part of Eleuther’s freely accessible “Pile,” which companies including Apple, Anthropic, Nvidia, Bloomberg and Salesforce have used to train their AI models.
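To make the mechanics concrete: pulling subtitle text off YouTube at scale is technically trivial. The sketch below is not EleutherAI’s actual pipeline; it is a hypothetical illustration using the open-source youtube-transcript-api package (its long-standing get_transcript helper) and placeholder video IDs, showing how caption tracks can be flattened into the kind of plain-text corpus that ends up in training datasets.

```python
# Hypothetical illustration only -- not EleutherAI's actual pipeline.
# Requires: pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

video_ids = ["VIDEO_ID_1", "VIDEO_ID_2"]  # placeholder IDs, not drawn from the real dataset

corpus = []
for video_id in video_ids:
    try:
        # Returns a list of caption segments: {"text", "start", "duration"}
        segments = YouTubeTranscriptApi.get_transcript(video_id, languages=["en"])
    except Exception:
        continue  # no captions available, or transcripts disabled for this video
    # Flatten the timed segments into one plain-text transcript document
    text = " ".join(segment["text"] for segment in segments)
    corpus.append({"video_id": video_id, "text": text})

print(f"Collected {len(corpus)} transcripts")
```

The point of the sketch is simply that nothing in the tooling asks whether the channel owner consented; permission and compensation have to come from outside the pipeline.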
Marques Brownlee — whose videos were included in the dataset — said in response that this is “going to be an evolving problem for a long time.” He added that he pays a service “(by the minute) for more accurate transcriptions of my own videos, which I then upload to YouTube's back-end. So companies that scrape transcripts are stealing paid work in more than one way. Not great.”
AI researcher Chomba Bupe said in response that “these companies should be paying for the data they are using to train their models on. Greed is one hell of a drug, it will be humanity’s downfall.”
Invest smarter with Public and AI
If you're looking to leverage AI in your investment strategy, you need to check out Public. The all-in-one investing platform allows you to build a portfolio of stocks, options, bonds, crypto, and more, while incorporating the latest AI technology to help you make more informed investment decisions.
Last year, Public introduced Alpha, a proprietary AI that transforms how you interact with your portfolio. Alpha delivers real-time financial insights on the assets you care about, including alerts on unusual market movements and earnings call summaries. This proactive approach ensures you're always informed about significant events affecting your investments.
You can also swipe down in the Public app to access Alpha's reactive chat feature. Ask any question about any asset, and Alpha leverages its natural language processing capabilities to provide insightful answers with precise fundamental data.
Join Public, and build your primary portfolio with AI-powered insights and analysis.
Exclusive Interview: Founder of Intuition Robotics talks AI companionship
Source: Intuition Robotics
Much of the technology we interact with today involves some form of enhanced communication. The internet — and everything it spawned — had the clear effect of bringing the world together (and making everyone in it more isolated).
More recent advancements in artificial intelligence, meanwhile, are pushing that communicative technology to an extreme. I am talking, of course, about AI companionship.
Virtual AI girlfriends, for example, no longer exist on the fringe.
There are dozens of apps — some with millions of users — that offer personalized connection, companionship and illusory romance.
The idea of AI companionship has taken a few other forms, as well. Intuition Robotics sells a device called ElliQ that is marketed as an “AI care companion robot designed to empower independence and promote healthy living.”
The target audience for ElliQ, though, is not lonely young men. The device is an “electric pal” specifically for the elderly, designed to check in on users, remind them to take their medicine and interact with them in a regular, intuitive way.
Whatever the form, AI companionship is becoming more of a real thing. And the ethical and psychological impacts of it are, well, somewhat undecided. Studies have found that, among younger people, less time on social media reduces loneliness; for older adults, more time on social media reduces loneliness (think someone who lives across the country video calling with their grandkids).
The idea of an AI companion, according to some psychologists, is not bad, as it can provide personalized 24/7 support to anyone, with Dr. Mike Brooks specifically saying: “For the elderly in retirement homes or those isolated due to chronic illness, an AI chatbot can offer the gift of constant, caring companionship.”
At the same time, there are real risks here, risks that take a variety of forms.
One of these involves an “over-dependence” on AI partners that could “hinder the development and maintenance of authentic human connections, as well as diminish individuals’ capacity for empathy, vulnerability and the skills necessary to navigate the complexities of human relationships,” according to AI expert and ethicist Nell Watson.
Liberty Vittert, a professor of data science, wrote that AI companionship “is enabling a generation of lonely men to stay lonely and childless, which will have devastating effects on the U.S. economy in less than a decade.”
There is also the danger of perpetuating the illusion of “perfect” relationships, which Tara Hunter, CEO of Full Stop Australia, called “really frightening.”
“Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic,” she told the Guardian.
The tricky thing is that no one really knows what the impact of prolonged AI companionship will be. There’s no real data on that. We don’t know how it will impact human psychology and sociology. There might be some good things. There will be some bad things.
I sat down with Dor Skuler, the CEO and co-founder of Intuition Robotics, to talk about ElliQ and the philosophy of AI companionship. Since my own impressions of AI companionship evolved throughout our conversation, I thought it best to recreate some of it for you.
The following has been lightly edited for clarity and length.
The Deep View: Much of the AI sector is deeply philosophical. And the place you’re playing at, where the target is older people to help with loneliness, that’s really interesting to me, because there is an element to this that’s almost dystopian, of a digital friend because there’s no one else around. I'm wondering how you landed at that target group; why was that the focus?
Dor Skuler: That was the goal and the technology came after. It is a bit dystopian, where modern society is. It used to be that the older people in our lives, they were the wise men and women of the village. They were the people you would go to for advice; they would be the counsel of the tribe. Now, we are in this very, very small group of just our immediate family. And when we age, we're essentially kind of thrown to the wayside to a certain extent. And that's not how I want to age and it's not how I want to see my parents aging.
Unfortunately, we can't snap our fingers and surround our loved ones with people. Going to see Mom or Dad once a week for a couple of hours — which I wish everybody would do — even if you do that you still have all the rest of the hours of the week that they're by themselves, and it's especially profound once you lose a spouse.
We felt that in the times when the person is alone, a digital friend is better than the alternative. But it has to be done right and it can't be a flash in the pan of some game that gets old after two hours.
The Deep View: With language models, what we’re seeing is the illusion of intelligence rather than the real thing. With something like this, where the target is elderly loneliness, does it matter? Is this the point where the illusion becomes good enough?
Dor Skuler: To a certain extent, we have needs and this product helps meet those needs, and whether they’re delivered by a carbon machine or digital machine, I'm not sure it matters. But how matters.
So we made sure from the get-go that ElliQ never pretends to be human. She's always authentically an AI.
We felt like as long as we were authentically an AI, then humans can decide if they want to build a relationship with this AI, knowing exactly what it is and what it isn't. Just like we know how to do that with pets. For people who have dogs, dogs play a really important part in their lives. They have a real relationship with them, but they never confuse that with a real person. That's what we try to do here and that's what we're seeing from our customers; they see ElliQ as something in between a device and a human, but not a human and not a device.
The Deep View: We’re seeing a broad push toward conversational assistants; I’m wondering what you think about these kinds of synthetic companions around younger generations, of having kids grow up with something like this?
Dor Skuler: I have very little experience with that. I can tell you that my kids have a relationship with ElliQ. I don't know what the effects are on children and I think at a higher level — these are personal thoughts, not the company view — I think younger people experience loneliness while being next to other people. It's something about social media, the devices, the screens … you could be right next to somebody else and still feel very lonely, whereas the population we serve are lacking human contact.
And that to a certain extent is a different thing to solve. When you think about it, the harshest punishment we have in modern society is sending a prisoner to solitary confinement. And that's what we subject the people we love most in the world to on a regular basis. It just doesn't make much sense.
But to a certain extent, it might be easier to solve that problem than it is to get three kids that are in the same room not to feel lonely.
Skuler added that user data belongs to the user, whether or not someone else is paying for it. And the company explicitly asks every time before sharing data (which it does with medical services). He said that Intuition never sells user data: “We're profitable enough on a per unit economic basis to not need to do any of those things.”
I don’t know how I feel about this. I don’t have an essay for you on this one. I maintain that it’s a shame that there is no other option. And I personally find the idea of building a relationship with an AI to be disturbing, and like much of the weirdness in this field, anti-human.
But if it helps people who don’t have another option, I certainly don’t have a problem with that.
I just worry what it will usher in. I worry about a world where kids grow up around AI companions. I worry what that will do to us.
Which image is real?
A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Your view on AI as a crutch in education:
Nearly half of you said that, so long as it is approached thoughtfully, it can be used as a tool rather than a crutch. And tools are good.
Use it thoughtfully:
“It helps to do things much quicker, one should use it as a tool to assist, but never to replace humans”
Use it thoughtfully:
“From other reading, It seems like empathy is key to learning and therefore human educators need to teach how to use the tools and not rely on the tools to do the educating.”
What do you think about AI companions? Would you use one?
*Public disclosure: All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1828849), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.
Alpha is an experiment brought to you by Public Holdings, Inc. (“Public”). Alpha is an AI research tool powered by GPT-4, a generative large language model. Alpha is experimental technology and may give inaccurate or inappropriate responses. Output from Alpha should not be construed as investment research or recommendations, and should not serve as the basis for any investment decision. All Alpha output is provided “as is.” Public makes no representations or warranties with respect to the accuracy, completeness, quality, timeliness, or any other characteristic of such output. Your use of Alpha output is at your sole risk. Please independently evaluate and verify the accuracy of any such output for your own use case.