⚙️ People aren’t using AI for work. They’re using it for therapy

Good morning. This is a big week for earnings, though the big banks, rather than Big Tech, are in the spotlight. We may get an early indication of how companies are navigating the (chaotic) macroeconomic environment of recent weeks.
A relevant podcast for today’s main story: A different kind of artificial companionship.
— Ian Krietzberg, Editor-in-Chief, The Deep View
In today’s newsletter:
🌊 AI for Good: Plastic pollution
💻 OpenAI is retiring GPT-4
👁️🗨️ Microsoft confirms that it is ‘slowing or pausing’ some data center projects
⚕️ People aren’t using AI for work. They’re using it for therapy
AI for Good: Plastic pollution

Source: The Ocean Cleanup
The Ocean Cleanup is a nonprofit on a simple mission: clear 90% of floating plastic trash out of our oceans by 2040.
The organization has been working for more than a decade to develop scalable technologies to serve that mission; to date, its crews have removed more than 44 million pounds of plastic from the ocean.
It’s using AI for help.
The details: “Data is key,” according to the organization’s Ocean Director, Henk Van Dalen. “It will help us to increase the effectiveness of our operations.” The challenge is ensuring that its crews are operating in so-called “hotspots,” or areas where plastic density is the highest. The difficulty with that is those hotspots tend to move around.
The team’s solution features AI-powered cameras, a system dubbed ADIS (Automatic Debris Imaging System).
The cameras are mounted on a variety of partner ships that travel the globe; an onboard algorithm parses the imagery and uploads the data to the cloud, where separate algorithms process it all, identifying those hotspots.
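The Ocean Cleanup hasn’t published ADIS’s internals, but the pipeline described above (detect debris in geotagged camera frames, then aggregate detections into a density map) is simple to sketch. Below is a minimal, hypothetical Python version; every name and number in it is an assumption, not the real system:

```python
# Hypothetical sketch of an ADIS-style pipeline; The Ocean Cleanup has not
# published the real system, so all names and thresholds here are assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Frame:
    lat: float          # ship position when the frame was captured
    lon: float
    debris_count: int   # stand-in for the onboard image model's output

def update_density_grid(frames, grid: Counter, cell_deg: float = 0.5) -> None:
    """Shipboard step: bin each frame's detections into lat/lon grid cells."""
    for f in frames:
        cell = (round(f.lat / cell_deg), round(f.lon / cell_deg))
        grid[cell] += f.debris_count

def hotspots(grid: Counter, top_n: int = 5):
    """Cloud-side step: the densest cells approximate the moving hotspots."""
    return grid.most_common(top_n)

# Frames stream in from many partner ships; the grid accumulates globally.
grid = Counter()
update_density_grid([Frame(31.2, -142.5, 7), Frame(31.3, -142.4, 12)], grid)
print(hotspots(grid))  # prints the densest cells first
```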
The context: Here, we’re dealing with image processing algorithms, not language models. It’s a great example of the cost-benefit analysis that should be applied to AI; lightweight, low-energy-intensity models that result in the widespread removal of plastic pollution are worth the expenses necessary to produce them.

Fyxer AI: Gain 1 hour every day with an AI executive assistant.
Meet Fyxer, your AI Executive Assistant. Fyxer AI will get you back one hour, every day. Begin your day with emails neatly organized, replies crafted to match your tone, and crisp notes from every meeting.
Email Organization: Fyxer prioritizes important emails, ensuring high-value messages are addressed quickly while filtering out the spam.
Automated Email Drafting: Fyxer drafts personalized email responses in your tone of voice. It's so accurate that 63% of emails are sent without edits.
Meeting Notes: Fyxer listens to your video calls and sends a meeting summary with clear next steps.
Fyxer AI is even adaptable to teams, learning from team members' communication styles and enhancing productivity across the board.
Setting up Fyxer AI is easy—it takes just 30 seconds to get started with Gmail or Outlook, no learning curve or training needed.
There's no credit card required for the 7-day free trial, making it a risk-free productivity booster. Start your free trial now.
OpenAI is retiring GPT-4

Source: OpenAI
Beginning April 30, OpenAI’s workhorse GPT-4 model will no longer undergird ChatGPT.
According to a release note from last week, OpenAI will be replacing GPT-4 with GPT-4o, the multi-modal model OpenAI unveiled in May of last year.
The details: OpenAI said that the older model will still be available in the API.
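For developers who want to keep using it, that means pinning the model by name. A minimal sketch with the OpenAI Python SDK, assuming the identifier remains “gpt-4”:

```python
# Minimal sketch: keep calling the retiring model through the API, where
# OpenAI says it will remain available (assumes the ID stays "gpt-4").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # pinned explicitly; ChatGPT itself moves to GPT-4o
    messages=[{"role": "user", "content": "Summarize today's AI news."}],
)
print(response.choices[0].message.content)
```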
“GPT‑4 marked a pivotal moment in ChatGPT’s evolution,” according to the note. “We’re grateful for the breakthroughs it enabled and for the feedback that helped shape its successor. GPT‑4o builds on that foundation to deliver even greater capability, consistency, and creativity.”
And as OpenAI moves to replace GPT-4, the startup is reportedly on the verge of launching a new family of models to replace GPT-4o: GPT-4.1, an upgrade that could come as soon as this week.
The landscape: OpenAI’s model roadmap — always convoluted — is starting to get messy. CEO Sam Altman said in February that the long-awaited GPT-5 would function as a system integrating all of OpenAI’s other models, and that its release was just months away.
Later that month, OpenAI debuted GPT-4.5, a wildly expensive model that the startup has said it might not keep serving in its API for the long term.
And earlier in April, Altman posted about a “change of plans,” saying that the startup now plans to release “o3 and o4-mini after all.” He had said in February that those models would never get released as stand-alones; they were to be incorporated instead into GPT-5.
He said that those standalone releases will come “in a couple of weeks.”
At the same time, the FT reported that OpenAI has slashed the time and resources it spends on safety-testing its models. One source, responsible for safety testing OpenAI’s o3 model, told the publication: “We had more thorough safety testing when [the technology] was less important.”
OpenAI did not respond to a request for comment.


Legal support: 12 former OpenAI employees have filed an amicus brief in Elon Musk’s lawsuit against OpenAI and Sam Altman, writing that Altman “was a person of low integrity who had directly lied to employees about the extent of his knowledge and involvement in OpenAI’s practices of forcing departing employees to sign lifetime non-disparagement agreements.”
The trade war: The White House said over the weekend that tariff exemptions will apply to computers, smartphones and chip-making equipment, something that, according to Wedbush analyst Dan Ives, “changes the entire situation for tech stocks with this black swan event for the industry removed.” Still, it doesn’t mean Big Tech is out of the woods yet; Trump has said that he will impose tariffs against semiconductor chips. Eventually. Maybe. And the phones and computers won’t be exempt from that second round of tariffs.

The Social Security Administration is gutting regional staff and shifting all public communication to X (Wired).
Netflix tests OpenAI search engine for movie, show recommendations (Bloomberg).
Elon Musk’s xAI is powering its facility in Memphis with ‘illegal’ generators (The Guardian).
The oxymoron of Mira Murati’s reported $2 billion seed round (Fortune).
Investors are growing concerned about a U.S. asset exodus as Treasurys and the dollar decline (CNBC).
Microsoft confirms that it is ‘slowing or pausing’ some data center projects

Source: Microsoft
The AI infrastructure build-out, already facing pressure from increasingly cautious investors, has come under more serious scrutiny in light of President Donald Trump’s tariff announcements and the trade wars they have incited.
The scale of the infrastructure construction on display here is being watched as a bellwether for the health of the broader AI trade. If the hyperscalers are spending billions on AI-optimized data centers, it means two things: one, that the hyperscalers themselves believe in the legitimate potential of the technology, and two, that Nvidia and TSMC will keep outperforming, which is good for their stock prices.
The details: Noelle Walsh, president of Microsoft’s Cloud Operations, confirmed in a LinkedIn post last week that Microsoft remains committed to spending $80 billion on data center investments this year.
She added that customer demand “continues to increase.”
However, explaining that Microsoft’s infrastructure approach involves a multi-year planning process, Walsh said: “By nature, any significant new endeavor at this size and scale requires agility and refinement as we learn and grow with our customers. What this means is that we are slowing or pausing some early-stage projects.”
She confirmed, though, that “our commitment to advancing AI technology and delivering value to our customers is unwavering.”
Deepwater’s Gene Munster acknowledged that this is a “directionally negative” confirmation, since it is likely to reduce Microsoft’s capex for 2026, which could impact the circular loop of the AI ecosystem.
Still, he said that the “potential that drove AI stocks higher in 2024 remains intact. At Deepwater, we continue to believe there are still 2–3 years left in this bull market, one that will ultimately end in a bubble burst.”
People aren’t using AI for work. They’re using it for therapy

Source: Generated with AI by The Deep View
The push for AI adoption has been inconsistent.
Enterprise curiosity has certainly been piqued, but most companies can’t really figure out where, when or how to use it.
There is a jagged edge of consumer adoption, meanwhile, where viral moments tend to do more to pull users onto a platform than the offer of genuine added value. I’m thinking here of the Studio Ghibli moment, which briefly burned up OpenAI’s GPUs and has since faded into the ether of past viral trends.
Harvard Business Review (HBR) recently updated its list of the top 100 generative AI use cases for 2025, a piece of research derived from an “expert-driven curation of public discourse” that mainly featured Reddit forums.
The details: The research found that technical applications — writing, for example — have become less popular in 2025, while personal, emotional applications have become more popular.
According to HBR, the top way people are using generative AI in 2025 is for therapy and companionship, up from the number two slot last year. After that, people are using chatbots for help organizing their lives, for finding purpose and for enhanced learning.
We don’t get to a business use case until the number five slot, where people are using chatbots to generate code.
According to HBR, there are three main advantages to the idea of GenAI therapy: it’s available 24/7, it’s potentially both more accessible and less expensive than traditional therapy, and it “comes without the prospect of judgment from another human being.”
“I talk to it everyday,” one user wrote. “It helps me with my brain injury daily struggles. It's helping me work thru family shame, brain fog, inability to focus, remind of what I have accomplished as I don't have any memory. It helps me decide what to eat, how to manage my day. It has saved my sanity.”
Another said: “A looooot of lonely people will let the AI-Version of the person that rejected them say that they love them. I predict some really sad and horrifying shit.”

And another wrote: “One thing I find especially useful is having it give advice on what I should do do next. It's helped me establish my core values, my principles, and my life goals. It's helped me introspect, too — figuring out what I'm all about, what I need to change, how to change. It's helped me declutter my brain, make better plans, make things more achievable.”

Beyond the fact that this isn’t what I’d call a rigorous study (Reddit …), I find myself thinking hard about the more rigorous, qualitative study published last year by MIT’s Sherry Turkle: ‘Who do we become when we talk to machines?’
Chatbots have not lived a human life. They do not have bodies; they do not fear illness and death. They do not know what it is like to start out small and dependent and grow up, so that you are in charge of your own life but still feel many of the insecurities you knew as a child.
Machines cannot put themselves in your shoes, and they cannot commit to you. To put it too bluntly, if you turn away from them to make dinner or attempt suicide, it is the same to them …
We are accustomed to this cycle: Technology dazzles but erodes our emotional capacities. Then, it presents itself as a solution to the problems it created.
For so many, the performance of empathy is empathy enough. The desire to sidestep vulnerability links the field of artificial intimacy to studies that suggest a near epidemic of emotional fragility … If we live in a culture where significant numbers of people say they should never have to be made uncomfortable, and since discomfort, disappointment and challenges are part of most human relationships, that is a dilemma. Social media, texting, and pulling away from face-to-face conversations get you some relief. Talking to chatbots gets you a lot further.
Social media, Turkle writes, “gave us the illusion of companionship without the demands of intimacy.” Generative AI takes that much, much further, with anthropomorphic chatbots — by design — conforming to user-specific preferences and data, filling specific roles, such as therapist, tutor and friend.
We see this with Replika and with Character.AI. And the early innings of this fundamental societal shift have already been marred by multiple chatbot-related suicides and dramatic personality shifts.
Privacy and security aside, chatbots represent an unreliable, biased, twisted mirror, a mirror in which bubbles and biases get reinforced, vital therapeutic boundaries get lost and people can forget those struggles that make them human.
“First, we talked to each other. Then we talked to each other through machines. Now, we talk directly to programs,” Turkle writes. “We treat these programs as though they were people. Will we be more likely to treat people as though they are programs? Will we find other people exhausting because we are transfixed by mirrors of ourselves?”


Which image is real?

🤔 Your thought process:
Selected Image 1 (Left):
“I usually do these challenges on my desktop PC with a huge 4K monitor where I can zoom in and look for irregularities, but this time I tried it on my tiny phone screen and I just had to go with my gut. Now, was I really right? Or did I just get lucky?”
Selected Image 1 (Left):
“Not a welder but even I know that Image 2's mask is unsafe.”
💭 A poll before you go
Thanks for reading today’s edition of The Deep View!
We’ll see you in the next one.
P.S. Enjoyed reading? Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning. Subscribe here!
Here’s your view on OpenAI’s altruism:
37% don’t believe for one single second that OpenAI’s conversion is ‘in service to its mission.’
17% think Musk is right and the conversion shouldn’t go through.
20% think they used to be altruistic but not anymore.
12% think they are absolutely altruistic.
Not for a second:
“Just that Altman is back and the whole board has turned over since his return is suspect. Also, the fact that he personally will receive a stake confirms it for me.”
Not even for a second:
“I was born during the day, but it wasn't yesterday. HOWEVER, Musk needs to pipe down as well. He could end so many problems the world over with his wealth and instead wants to bully OpenAI? It's money all the way down, folks...”
How do you use GenAI?
If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.