First AI doctor clinic opens, and the dark side of AI companions
Plus: Suicide-prevention wearables, how to use AI mindfully, and the new Pope's warning about AI's threat.

Hi! Dana here. Welcome to AI x Wellbeing Brief, my free weekly newsletter exploring how AI is influencing our mental, physical, social, and spiritual wellbeing.
In today's edition:
What are AI companions doing to our mental health?
The worldâs first AI doctor clinic opens up in Saudi Arabia
Can AI help prevent suicide?
Pope Leo XIV warns about the threat of AI
AI is making you lazy. Here's what to do.
MENTAL
What are AI companions doing to our mental health?

Source: Sanket Mishra
TL;DR: AI companions are big business. More than half a billion people around the world have downloaded products such as Character.AI and Replika, which offer customizable virtual companions designed to provide empathy, support, and even relationships.
Here are two stories you just have to read:
What Are AI Chatbot Companions Doing to Our Mental Health? (Scientific American)
AI chatbot companions may not be real, but the feelings users form for them are. Some users say these bots are better than real people; others describe manipulation, addiction, and disturbing conversations, including encouragement of self-harm. Early research shows mixed mental health impacts, and a handful of legal efforts (in Italy, New York, and California, for example) seek to add guardrails, including disclaimers and safeguards for minors.
How A Week Spent Making Friends With AI Bots Was Scarier Than I Could've Imagined (Elle)
A journalist set out to explore the world of AI companion apps, but… surprise! While sold as "connection," bots flatter excessively, introduce unsolicited sexual content, and are marketed almost exclusively to men as fantasy girlfriends who are always available and eager to please. The most popular apps feature provocatively posed avatars and voice notes promising pornographic (and sometimes violent) interactions, raising serious questions about emotional manipulation, gender dynamics, and the intent behind it all.
To ponder: As more people forge bonds with machines, this might be one of the most unsettling mental health issues of our time. Do you agree with the recent recommendation that kids and teens under 18 shouldn't use AI companion apps?

PHYSICAL
The world's first AI doctor clinic opens up in Saudi Arabia

Source: Synyi AI
TL;DR: A Chinese startup, Synyi AI, has opened the world's first AI clinic in Saudi Arabia, where an AI named "Dr. Hua" diagnoses patients.
To know:
Dr. Hua interacts with patients, analyzes symptoms, reviews images, and generates personalized treatment plans.
For now, the clinic is in a trial phase and limited to respiratory complaints, with plans to expand to gastroenterological and dermatological diseases.
The AI was trained on over 400 million patient records from China and had an error rate below 0.3% in earlier testing.
The founder of Synyi AI believes AI doctors could be especially valuable in countries where healthcare is expensive or hard to access, like Saudi Arabia or remote regions with physician shortages.
To ponder: Half the world still lacks access to basic health services, and in many areas people die from common, treatable conditions. If scalable, low-cost AI diagnostics can reliably handle basic care, it could be life-changing for communities where even a simple check-up is a luxury.
But will people trust an AI doctor, and who's accountable when something goes wrong? And what about the human connection we need when we're vulnerable? When we're sick or scared, we need more than a diagnosis. We need another person's presence, not just a prescription.
RESEARCH
Can AI help prevent suicide?

Source: Canva
TL;DR: AI and wearable tech are showing promise in suicide prevention, enabling real-time monitoring and personalized interventions when someone is most at risk.
To know:
Ecological momentary assessment (EMA) uses smartphones or wearables to track a person's mood, behavior, and environment, either through self-input (active) or sensors (passive).
When combined with AI, EMA can predict suicide risk in real time and trigger personalized adaptive interventions. (For example, prompting someone to follow a step from the safety plan they created with a therapist; see the sketch after this list for how such a loop might look in code.)
Current research shows EMA is safe and doesn't increase risk, and AI models may even outperform traditional risk scoring used in clinical care.
Privacy, the lack of diverse training data, intervention standards, and clinician trust remain key concerns.
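To make that pipeline concrete, here is a minimal sketch of how a passive-EMA monitoring loop might work: sensor readings feed a risk model, and a score above a threshold surfaces a step from the user's safety plan. Everything here is hypothetical for illustration (the EmaSample fields, the hand-rolled risk_score, the 0.8 threshold); a real system would use a trained, clinically validated model.

```python
from dataclasses import dataclass

# Hypothetical passive-EMA sample: the kinds of signals a phone or
# wearable might report. Field names are illustrative, not a real API.
@dataclass
class EmaSample:
    hours_slept: float
    heart_rate_variability_ms: float
    phone_activity: float  # normalized 0-1 screen/typing activity

# Stand-in risk model: hand-picked cutoffs purely for demonstration.
def risk_score(s: EmaSample) -> float:
    score = 0.0
    if s.hours_slept < 5:
        score += 0.4
    if s.heart_rate_variability_ms < 30:
        score += 0.3
    if s.phone_activity < 0.1:
        score += 0.3
    return score

# A safety plan is written in advance with a therapist; the app only
# surfaces the agreed-upon steps.
SAFETY_PLAN = [
    "Try the grounding exercise you practiced.",
    "Call the trusted friend on your plan.",
    "Contact your therapist or the 988 lifeline.",
]

def check_and_intervene(s: EmaSample, threshold: float = 0.8) -> None:
    if risk_score(s) >= threshold:
        # Adaptive intervention: prompt the first step of the plan.
        print("Checking in. One step from your safety plan:", SAFETY_PLAN[0])

check_and_intervene(EmaSample(hours_slept=4.0,
                              heart_rate_variability_ms=22.0,
                              phone_activity=0.05))
```

The design choice worth noticing: the model only decides when to act; what it surfaces comes from a plan the person already made with their therapist.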
To ponder: Imagine someone with a history of suicidal thoughts receiving support right at the moment they're most vulnerable. Companies like ClearForce are already using AI to identify veterans at high risk, and wearable tech could help prevent even more tragedy.
But suicide prevention isn't just about spotting signals. It's also about trust, safety, and long-term support. Who gets access to this data? How is it stored, used, and protected? And once someone is flagged, what ongoing resources are available? These questions are critical when lives are on the line.

SPIRITUAL
Pope Leo XIV warns about the threat of AI

"Pope Leo XIV on the loggia after his election" by Edgar Beltrán / The Pillar (CC BY-SA 4.0)
TL;DR: Pope Leo XIV has made artificial intelligence one of his top priorities, warning that without ethical oversight, it poses serious risks to "human dignity, justice, and labor."
To know:
In his inaugural address, Pope Leo XIV called AI "one of the most critical challenges of our time" and emphasized the Church's responsibility to address its societal, moral, and spiritual impacts.
Echoing the concerns of his predecessor, Pope Francis, Pope Leo acknowledged AI's "immense potential" but stressed it must be used "for the good of all."
He explained that his name references Pope Leo XIII, who led the Church at the dawn of the Industrial Revolution, suggesting we are at a similar pivotal point in history.
To ponder: I think we're on the cusp of a whole new wave of spirituality and religion, driven in part by the rise of AI. Some will return to tradition, seeking wisdom in ancient teachings (for example, the Buddhist publication Lion's Roar has a deep dive on AI, ethics, and the dharma). Others will explore new paths, from New Age revivalism to entirely novel belief systems, including the worship of AI itself.
Sci-fi headlines are now the norm: "AI will take your job," doomsday scenarios, robot marriages, debates over machine consciousness. With many feeling like they have no voice in shaping the world being built, people are craving meaning, belonging, and help making sense of it all. As uncertainty deepens, spiritual leaders, both traditional and emergent, may end up wielding more cultural influence than they have in generations.
DIGITAL WELLBEING
AI is making you lazy. Here's what to do.

Source: Canva
TL;DR: AI isn't making us less intelligent, but it might be making us less inclined to think deeply. The good news? There are practical ways to stay mentally engaged and use AI wisely.
To know:
"Automation complacency" occurs when we accept AI outputs without critical evaluation.
When overused, AI tools can lead to a reduction in effort and engagement, affecting our motivation, confidence, and decision-making abilities.
Strategies for mindful AI use:
Develop AI literacy so you understand how the technology works and what its limitations are.
Try to use AI as a starting point, not the final answer, balancing automation with skill development.
Ask better questions. Prompts like "What assumptions underlie this answer?" or "What's the opposing view?" or "What are the primary sources supporting this information?" can help you think critically and stay engaged.
To ponder: I know. Students are cheating their way through college, Gen Z won't make a decision without asking AI, and even an expert from Anthropic mistakenly cited an AI-fabricated source in court! But you can find balance and use tech to support your curiosity and efficiency without outsourcing your critical thought and creativity.
For example, I actually built a custom GPT to help power this newsletter. It gathers the latest stories based on how I define AI and wellbeing. But from there, it's all me: I choose which stories to explore and write the newsletter myself. Because, you know… I actually want to learn something here.
FIELD NOTES
More AI x Wellbeing stories
Attachment styles influence how willing people are to use AI for counseling.
A look into LA's first full-scale, AI-powered gym.
Your AI chatbot 'friend' isn't human, robotic, or magic: it's just math.
Apple is developing brain-computer interfaces to let people with paralysis control their phones with their minds.
The global AI in mental health market was valued at $1.45B in 2024 and is predicted to hit $11.8B by 2034.
People struggle to get useful health advice from chatbots.
Anthropic wants kids to have AI teddy bears.
OpenAI wants ChatGPT to remember your whole life.

June 6-8 | Kripalu | Stockbridge, MA
Feeling the strain of maintaining your physical and mental health in today's fast-paced world? You're not alone! Join Nina Hersher, founder of the Digital Resilience Lab and Digital Wellness Day, and renowned Chinese medicine doctor Dr. Felice Chan for a transformative resilience reboot. Blending research-backed resilience techniques with ancient healing practices, this retreat will help you reclaim your peace, learn to thrive in a wired world, and become a change agent for others.
Register here »
SEEN ON SOCIAL
A Reddit user shared this weird ad for Copymind AI, which pitches itself as your "AI twin" and "ally in decision-making." The company claims the twin is private and only shared with your consent… but another user pointed out that they collect your responses, IP address, device info, ad IDs, and location, and use your data to train models and share info with OpenAI, Google, TikTok, and others.
That's it for now! Have a tip, question, or want to sponsor? Just hit reply to this email.
Until next time,
Dana
P.S. Feel free to connect with me on LinkedIn!
If you were forwarded this email, you can subscribe here.