Therapists Face Backlash for Secretly Using ChatGPT in Client Sessions

What Happened

An increasing number of therapists have begun using OpenAI's ChatGPT as an aid in client counseling sessions, sometimes without disclosing this to patients. According to an MIT Technology Review report, some clients felt betrayed or even triggered upon discovering that AI had been used without their explicit consent. The practice raises questions about patient privacy and transparency, as well as the clinical appropriateness of relying on AI-generated responses in mental health care. While some therapists cite efficiency and creative support as benefits, the lack of clear guidelines and protections for sensitive health data concerns practitioners and clients alike.

Why It Matters

The secretive use of ChatGPT in therapy sessions highlights emerging ethical dilemmas in AI adoption within sensitive domains like mental health. The incident underscores the pressing need for regulatory guidance and professional standards to ensure confidentiality, informed consent, and responsible AI use in healthcare.

BytesWall Newsroom

The BytesWall Newsroom delivers timely, curated insights on emerging technology, artificial intelligence, cybersecurity, startups, and digital innovation. With a pulse on global tech trends and a commitment to clarity and credibility, our editorial voice brings you byte-sized updates that matter. Whether it's a breakthrough in AI research or a shift in digital policy, the BytesWall Newsroom keeps you informed, inspired, and ahead of the curve.