• OpenAI’s CEO says using ChatGPT for therapy has serious privacy risks
  • Your private chats might be exposed if OpenAI were to face a lawsuit
  • Feeding your private thoughts into an opaque AI is also a risky move

One of the upshots of having an artificial intelligence (AI) assistant like ChatGPT everywhere you go is that people start leaning on it for things it was never meant for. According to OpenAI CEO Sam Altman, that includes therapy and personal life advice – but it could lead to all manner of privacy problems in the future.

On a recent episode of the This Past Weekend w/ Theo Von podcast, Altman explained one major difference between speaking to a human therapist and using an AI for mental health support: “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

One potential outcome, Altman claimed, is that OpenAI could be legally required to cough up those conversations were it to face a lawsuit. Without the legal confidentiality you get when speaking to a doctor or a registered therapist, there would be relatively little to stop your private worries being aired in public.

Altman added that many users, particularly young people, are turning to ChatGPT in this way, and they may be especially vulnerable to that kind of exposure. But regardless of your age, these conversations are not the kind of content most people would be happy to see revealed to the wider world.

A risky endeavor

OpenAI CEO Sam Altman being interviewed by Theo Von.

(Image credit: Theo Von)

The risk of having your private conversations opened up to scrutiny is just one privacy risk facing ChatGPT users.

There is also the issue of feeding your deeply personal worries and concerns into an opaque system like ChatGPT, where your conversations could be used to train OpenAI’s models and resurface when other users ask similar questions.

That’s one reason why many companies have licensed their own ring-fenced versions of AI chatbots. Another alternative is an AI like Lumo, which is built by privacy stalwart Proton and features encryption designed to protect everything you write.

Of course, there’s also the question of whether an AI like ChatGPT can replace a therapist in the first place. Whatever the potential benefits, any AI chatbot is ultimately regurgitating the data it was trained on. None are capable of original thought, which limits the usefulness of the advice they can give you.

Whether or not you choose to open up to OpenAI, it’s clear that a privacy minefield surrounds AI chatbots, from the lack of legal confidentiality to the danger of having your deepest thoughts used as training data for an inscrutable algorithm.

It’s going to take a lot of effort, and a lot more legal clarity, before enlisting an AI therapist becomes a significantly less risky endeavor.
