First AI doctor clinic opens, and the dark side of AI companions

Plus: Suicide-prevention wearables, how to use AI mindfully, and the new Pope's warning about AI's threat.

AI x Wellbeing Brief newsletter

👋 Hi! Dana here. Welcome to AI x Wellbeing Brief, my free weekly newsletter exploring how AI is influencing our mental, physical, social, and spiritual wellbeing.

In today’s edition:

  • What are AI companions doing to our mental health?

  • The world’s first AI doctor clinic opens up in Saudi Arabia

  • Can AI help prevent suicide?

  • Pope Leo XIV warns about the threat of AI

  • AI is making your brain lazy. Here’s what to do.

MENTAL
What are AI companions doing to our mental health?

Source: Sanket Mishra

TL;DR: AI companions are big business. More than half a billion people around the world have downloaded products such as Character.AI and Replika, which offer customizable virtual companions designed to provide empathy, support, and even relationships.

Here are two stories you just have to read: 

  • What Are AI Chatbot Companions Doing to Our Mental Health? (Scientific American)
    AI chatbot companions may not be real, but the feelings users form for them are. Some users say these bots are better than real people; others describe manipulation, addiction, and disturbing conversations like encouraging self-harm. Early research shows mixed mental health impacts, and a handful of legal efforts (like in Italy, New York, and California) seek to add guardrails, including disclaimers and safeguards for minors.

  • How A Week Spent Making Friends With AI Bots Was Scarier Than I Could’ve Imagined (Elle)
    A journalist set out to explore the world of AI companion apps, but…surprise! While sold as “connection,” bots flatter excessively, introduce unsolicited sexual content, and are marketed almost exclusively to men as fantasy girlfriends who are always available and eager to please. The most popular apps feature provocatively posed avatars and voice notes promising pornographic (and sometimes violent) interactions, raising serious questions about emotional manipulation, gender dynamics, and the intent behind it all. 

To ponder: As more people forge bonds with machines, this might be one of the most unsettling mental health issues of our time. Do you agree with the recent recommendation that kids and teens under 18 shouldn’t use AI companion apps?

The world’s first AI doctor clinic opens in Saudi Arabia

Source: Synyi AI

TL;DR: A Chinese startup, Synyi AI, has opened the world’s first AI clinic in Saudi Arabia where an AI named “Dr. Hua” diagnoses patients.

To know:

  • Dr. Hua interacts with patients, analyzes symptoms, reviews images, and generates personal treatment plans.

  • For now, the clinic is in a trial phase limited to respiratory complaints, with plans to expand to gastroenterological and dermatological diseases.

  • The AI was trained on over 400 million patient records from China, and had an error rate of less than 0.3% in earlier testing.

  • The founder of Synyi AI believes AI doctors could be especially valuable in countries where healthcare is expensive or hard to access, like Saudi Arabia or remote regions with physician shortages.

To ponder: Half the world still lacks access to basic health services, and in many areas, people die from common treatable conditions. If scalable, low-cost AI diagnostics can reliably handle basic care, it could be life changing for communities where even a simple check-up is a luxury.

But will people trust an AI doctor, and who’s accountable when something goes wrong? And what about the human connection we need when we’re vulnerable? When we’re sick or scared, we need more than a diagnosis. We need another person’s presence, not just a prescription.

Can AI help prevent suicide?

Source: Canva

TL;DR: AI and wearable tech are showing promise in suicide prevention, enabling real-time monitoring and personalized interventions when someone is most at risk.

To know:

  • Ecological momentary assessment (EMA) uses smartphones or wearables to track a person’s mood, behavior, and environment, either through self-input (active) or sensors (passive).

  • When combined with AI, EMA can predict suicide risk in real time and trigger personalized, adaptive interventions. (For example, prompting someone to follow a step from the safety plan they created with a therapist.)

  • Current research shows EMA is safe and doesn’t increase risk, and AI models may even outperform traditional risk scoring used in clinical care.

  • Privacy, the lack of diverse training data, intervention standards, and clinician trust remain key concerns.

To ponder: Imagine someone with a history of suicidal thoughts receiving support right at the moment they’re most vulnerable. Companies like ClearForce are already using AI to identify veterans at high risk, and wearable tech could help prevent even more tragedy.

But suicide prevention isn’t just about spotting signals. It’s also about trust, safety, and long-term support. Who gets access to this data? How is it stored, used, and protected? And once someone is flagged, what ongoing resources are available? These questions are critical when lives are on the line.

Pope Leo XIV addresses AI

“Pope Leo XIV on the loggia after his election” by Edgar Beltrán / The Pillar (CC BY-SA 4.0)

TL;DR: Pope Leo XIV has made artificial intelligence one of his top priorities, warning that without ethical oversight, it poses serious risks to “human dignity, justice, and labor.”

To know:

  • In his inaugural address, Pope Leo XIV called AI “one of the most critical challenges of our time,” and emphasized the Church’s responsibility to address its societal, moral, and spiritual impacts.

  • Echoing the concerns of his predecessor, Pope Francis, Pope Leo acknowledged AI’s “immense potential,” but stressed it must be used “for the good of all.”

  • He explained that his name references Pope Leo XIII, who led the Church at the dawn of the Industrial Revolution, suggesting we are at a similar pivotal point in history.

To ponder: I think we’re on the cusp of a whole new wave of spirituality and religion, driven in part by the rise of AI. Some will return to tradition, seeking wisdom in ancient teachings (for example, the Buddhist publication Lion’s Roar has a deep dive on AI, ethics, and the dharma). Others will explore new paths, from New Age revivalism to entirely novel belief systems, including the worship of AI itself.

Sci-fi headlines are now the norm: “AI will take your job,” doomsday scenarios, robot marriages, debates over machine consciousness. With many feeling like they have no voice in shaping the world being built, people are craving meaning, belonging, and help making sense of it all. As uncertainty deepens, spiritual leaders, both traditional and emergent, may end up wielding more cultural influence than they have in generations.

Too much AI reduces critical thinking and decision making

Source: Canva

TL;DR: AI isn't making us less intelligent, but it might be making us less inclined to think deeply. The good news? There are practical ways to stay mentally engaged and use AI wisely.

To know:

  • “Automation complacency” occurs when we accept AI outputs without critical evaluation.

  • When overused, AI tools can lead to a reduction in effort and engagement, affecting our motivation, confidence, and decision-making abilities.

  • Strategies for mindful AI use:

    • Develop AI literacy, so you understand how it works and its limitations.

    • Try to use AI as a starting point, not the final answer, balancing automation with skill development.

    • Ask better questions. Prompts like “What assumptions underlie this answer?” or “What’s the opposing view?” or “What are the primary sources supporting this information?” can help you think critically and stay engaged.

To ponder: I know. Students are cheating their way through college, Gen Z won’t make a decision without asking AI, and even an expert from Anthropic mistakenly used an AI-fabricated source in court! But you can find balance and use tech to support your curiosity and efficiency, without outsourcing your critical thought and creativity.

For example, I actually built a custom GPT to help power this newsletter. It gathers the latest stories based on how I define AI and wellbeing. But from there, it’s all me: I choose which stories to explore and write the newsletter myself. Because you know…I actually want to learn something here.

FIELD NOTES
More AI x Wellbeing stories

🧲 Attachment styles influence how willing people are to use AI for counseling.

🏋️‍♀️ A look into LA’s first full-scale, AI-powered gym.

🔢 Your AI chatbot 'friend' isn't human, robotic, or magic. It's just math.

🧠 Apple is developing brain-computer interfaces to let people with paralysis control their phones with their minds.

📈 The global AI in mental health market was valued at $1.45B in 2024, and is predicted to hit $11.8B by 2034.

👩‍⚕️ People struggle to get useful health advice from chatbots.

🐻 Anthropic wants kids to have AI teddy bears.

📓 OpenAI wants ChatGPT to remember your whole life.

June 6-8 | Kripalu | Stockbridge, MA

Feeling the strain of maintaining your physical and mental health in today’s fast-paced world? You’re not alone! Join Nina Hersher, founder of the Digital Resilience Lab and Digital Wellness Day, and renowned Chinese medicine doctor Dr. Felice Chan for a transformative resilience reboot. Blending research-backed resilience techniques with ancient healing practices, this retreat will help you reclaim your peace, learn to thrive in a wired world, and become a change agent for others.

Register here »

SEEN ON SOCIAL

A Reddit user shared this weird ad for Copymind AI, which pitches itself as your “AI twin” and “ally in decision-making.” The company claims the twin is private and only shared with your consent… but another user pointed out that they collect your responses, IP address, device info, ad IDs, and location, and use your data to train models and share info with OpenAI, Google, TikTok, and others.

That’s it for now! Have a tip, question, or want to sponsor? Just hit reply to this email.

Until next time,
Dana

P.S. Feel free to connect with me on LinkedIn!

If you were forwarded this email, you can subscribe here.
