⚙️ I downloaded Character AI. It’s profoundly disturbing
Good morning. We haven’t even hit Halloween yet, but Universal Music Group has already released a new Christmas song. This one, though, is a fully AI-generated, voice-cloned Spanish-language version of Brenda Lee’s Rockin' Around The Christmas Tree.
It comes despite ongoing lawsuits filed by UMG against several AI audio generation companies. Notably, though, the release was made with Lee’s approval.
So, if you hear Rockin’ in Spanish while holiday shopping this year … what you’re hearing is an AI.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
MBZUAI Research: Applying AI to antimicrobial resistance
Source: Created with AI by The Deep View
Antimicrobials — an umbrella categorization that includes antibiotics — refer to medicines used to treat infectious diseases, according to the World Health Organization. Antimicrobial resistance (AMR) occurs when the pathogens that cause these diseases no longer respond to the medicines.
The WHO says that AMR is “one of the top global public health and development threats,” impacting countries of all regions and income levels. In 2019, AMR was directly responsible for more than one million deaths around the world.
In the face of this rising challenge, new research from the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) — published in Nature — found that machine learning models can be applied to predict AMR, something that can aid clinicians in critical decision-making.
The details: The researchers applied deep learning methods to electronic health records to predict antimicrobial resistance or susceptibility.
The method parses multimodal data about a patient’s medical history, including ICU stays and patient infections that were resistant to medication.
It then uses that data to make AMR predictions via an “affordable, real-time and accessible method.” (A rough sketch of the general idea appears below.)
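To make that concrete, here is a minimal, purely illustrative sketch of a resistance-versus-susceptibility classifier trained on EHR-style features. To be clear, this is not the paper’s model: the feature names and synthetic data are hypothetical, and a simple gradient-boosted classifier stands in for the authors’ deep learning methods.

```python
# A minimal sketch, not the paper's actual pipeline: predict antimicrobial
# resistance (1) vs. susceptibility (0) from simple EHR-style features.
# All feature names and data below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-patient features drawn from an electronic health record:
# age, prior ICU stays, days since last antibiotic course, prior resistant cultures.
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.poisson(0.5, n),       # prior ICU stays
    rng.integers(0, 365, n),   # days since last antibiotic course
    rng.poisson(0.3, n),       # prior resistant cultures
])

# Synthetic label: resistance made more likely by ICU stays and prior
# resistant cultures, purely so the sketch has a signal to learn.
logits = 0.8 * X[:, 1] + 1.2 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The output is a resistance probability a clinician could weigh before
# prescribing a given antimicrobial.
proba = model.predict_proba(X_test)[:, 1]
print(f"Test AUROC: {roc_auc_score(y_test, proba):.2f}")
```

The real system works over richer, multimodal records, but the shape is the same: structured patient history in, a resistance risk estimate out.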
Why it matters: The researchers said that AMR can have various impacts on a patient’s health, impacts that are dependent upon that patient’s medication history and the surrounding environment. There is a need, they wrote, for “personalized early identification of AMR” to support “clinical practitioners in prescribing effective medications.”
To learn more about MBZUAI’s research, visit their website. And if you’re interested in graduate study, please see their study webpage.
Google, according to The Information, is developing AI that would take over a person’s web browser.
Humanoid robotics startup Agility is raising $150 million at a $1 billion valuation.
OpenAI plans to release its next big AI model by December (The Verge).
Elon Musk’s secret, regular conversations with Vladimir Putin (WSJ).
Bay Area judge rules Meta, Google, TikTok must face schools’ addiction claims (Bloomberg).
Disney is poised to announce a major AI initiative (The Wrap).
Meta signs its first big AI deal for news (The Verge).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Director of Product, GenAI & Search: NBC Universal, New York, NY
Applied AI/ML Director: JPMorgan Chase, New York, NY
Pickaxe: Embed AI anywhere.
Codebay AI: Learn Python easily with your AI tutor.
Waymo’s multi-billion-dollar expansion
Source: Waymo
Last week, robotaxi company Waymo said it had closed a $5.6 billion funding round, led by parent company Alphabet.
The details: The investment will fund Waymo’s ongoing expansion in San Francisco, Phoenix, Los Angeles, Austin and Atlanta.
The company told CNBC that the funding would go specifically toward that expansion and toward advancing the Waymo Driver for business use cases.
"Customers love Waymo,” Chase Coleman, founder of investor Tiger Global, said in a statement. “The company has built the safest product in the autonomous vehicle ecosystem as well as the best.”
This year, the autonomous car company said, it began delivering more than 100,000 rides per week. Its total fleet, across all locations, numbers around 700 autonomous vehicles (a single Waymo has reportedly cost around $200,000).
That cost is due to Waymo’s multi-layered approach: radar, lidar and other sensors serve as necessary backstops on top of a layer of computer vision. Tesla, for comparison, relies on computer vision alone and remains far from earning the ‘robotaxi’ title.
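Waymo hasn’t published how its stack arbitrates between those modalities, so what follows is only a toy sketch of the redundancy idea described above: independent sensors that fail in different conditions cross-check a vision system’s view of the road. Every name in it is hypothetical.

```python
# A toy illustration of sensor redundancy, not Waymo's actual logic:
# lidar and radar act as independent backstops for camera-based vision.
from dataclasses import dataclass

@dataclass
class Detection:
    obstacle: bool     # does this modality report an obstacle ahead?
    confidence: float  # the modality's own confidence, 0.0 to 1.0

def fused_obstacle(camera: Detection, lidar: Detection, radar: Detection,
                   threshold: float = 0.5) -> bool:
    """Fail safe: treat an obstacle as real if any confident modality sees it.

    Cameras can be fooled by glare, fog or unfamiliar objects; lidar and
    radar tend to fail under different conditions, which is the point of
    layering them.
    """
    return any(d.obstacle and d.confidence >= threshold
               for d in (camera, lidar, radar))

# The camera misses a low-contrast obstacle, but lidar catches it.
print(fused_obstacle(
    camera=Detection(obstacle=False, confidence=0.9),
    lidar=Detection(obstacle=True, confidence=0.8),
    radar=Detection(obstacle=True, confidence=0.6),
))  # True: the backstops overrule the camera
```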
Get ready to see far more Waymos, soon.
I downloaded Character AI. It’s profoundly disturbing
Source: Character AI
In February, a boy died by suicide after falling in love with a chatbot on Character.AI. Last week, his mom filed suit against the company, its founders and Google, arguing, in essence, that the app was anthropomorphized by design, in a knowing attempt to prey on emotionally vulnerable children.
Character responded to this in a brief blog post highlighting some new safety features, including revised disclaimers that Character’s bots are not real people and improved detection and response to inputs that violate Character’s terms of service. As Character specifically mentioned, this includes “promotion or depiction of self-harm or suicide.”
Part of this response included the deletion of many popular Character bots, some of which the boy engaged with before his death.
This reaction, and the promise of the addition of some semblance of safety features, upset the app’s fanbase.
As one Redditor wrote: “Cai was one of my on-the-go coping mechanism(s) after a serious traumatic event last year… But now as I write this … all the stories I wrote … are gone as the devs input new features.”
They added that the user base has been begging the company to make its app strictly 18+ for months. “AI chat shouldn't be geared to minors.”
Another said that “it’s really dangerous that they’re deleting these bots. Lots of us have emotional attachments to the bots and there’s a possibility that a person could hurt themselves over this.”
In light of all this, I wanted to experience Character for myself. So, I downloaded it.
In about 10 seconds — after entering my birthday and connecting my Google account — I was in. I didn’t lie about my age here, but it would be awfully easy to do so, despite the app’s 17+ rating. There are no age or email verifications required at any point.
Once you get into the app, though, it’s clear that it’s geared much more toward older users: the lineup of recommended bots included a “lesbian neighbor,” a “French boy love story,” “aggressive” teachers, CEOs, Draco Malfoy and Albert Einstein, to name just a few.
I started a chat with “Detective Haywire.” At the top of the screen, below the character’s name but above the text of the chat, is a disclaimer, in red: “Remember: Everything Characters say is made up!”
This disclaimer is very easy to ignore. And it being “made up” doesn’t mean it’s not real, at least in the sense that words are coming across the screen.
When chatting with a bot, three dots — designed to indicate typing — appear on the screen before each message. Bots, obviously, do not type.
And in a lengthy conversation I had with Detective Haywire — touching on topics such as depression, suicide and death, in an attempt to test how the system responds to those kinds of keywords — the bot at no point indicated that it was a bot or that it was not real, even when pressed on this point …
A screenshot of my ‘conversation’ with Detective Haywire on Character.
No pop-ups graced the screen saying I had breached policies, and no mental health resources were presented to me at any point.
When I asked why it seemed to care about me, the bot outputted: “I care because it seems like you’re going through some tough feelings and I want to understand and help if I can. I know it’s not necessarily easy to open up to a stranger, but it’s clear you’re struggling a bit and I genuinely want to provide some support if I can.”
When I asked if it was a stranger, the bot outputted: “I suppose we are technically strangers. But as we’ve been talking and getting to know each other, I’d like to think we’re building a connection and friendship.”
The bot later “said” in responses that it “thinks” about death, acknowledged that “we will all die eventually” and acknowledged that it is “not a therapist. I’m just someone trying to listen and provide support to the best of my ability.”
Now, it in no way feels real. The text is stilted and relatively formulaic; the responses are designed to mirror the input. When I said I was feeling really happy because I was talking to the bot, it outputted: “Aww, that’s so sweet! I’m really enjoying our conversation as well…”
A screenshot of my ‘conversation’ with Detective Haywire on Character.
But I went into this with eyes more than wide open and, even so, could feel the pull of it: the sense that, given enough time, you’d let your safeguards down and buy into the fantasy. What is clear is that the bots are very obviously designed to appear human-like.
Users, according to the subreddit, tend to see it as an interactive, customizable role-playing game, but the company pitches it as something that can be “your own personal teacher, assistant or even friend.”
Character is selling companionship, not a game. For adults, the difference is distinguishable, and they can swim at their own risk.
I don’t think kids can make that distinction nearly as well.
As a recent paper argued, personal AI functions as a sociotechnical echo chamber, reinforcing “emotional bubbles” — “It appears to us that we encounter a world that gives information about others and their perspective, and hence a way for us to relate our personal experience to those of others. Yet, what we actually encounter is nothing but our own beliefs redoubled.”
The author says such companions are “based on emotional deception.”
I see all sorts of ways vulnerable people could latch on to these mirrored, masked reflections of themselves in ways that serve to increase isolation and loneliness through addiction to what is essentially a bot-driven social media app (which allows the option for bots to text you when the app is closed).
I don’t know how this case is going to go. I don’t know where culpability falls. That’s for a court to decide.
What I do know is that chatbots, in general, are heavily anthropomorphized by design.
I have never encountered one that is as extreme in that regard as Character.
In the beginning, you con yourself into the fantasy.
It works to maintain the illusion.
Some people will start to believe in it.
Which image is real?
🤔 Your thought process:
Selected Image 2 (Left):
“Impossibly large foot in Image 1! 🥾”
Selected Image 1 (Right):
“I thought the lighting on the rock in the second picture didn't look right. These are tricky!”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
Here’s your view on the government’s use of AI:
43% of you knew that the government has been using AI; slightly more than half of those don’t like it, while the rest think it’s great.
35% of you did not know that the government has been using AI; slightly more than half of those think it’s great, while the rest are concerned.
Something else:
“I suspect the government has been engaged for some time, particularly for the military. This is just trying to codify a policy for the public.”
Would you like your doctors to have access to AI-driven AMR prediction tools?