> Do you have any example why you believe this is effective?
For one, I recently expressed my doubts about this kind of use to an MD in my circles. Turns out, his opinion is that it is useful in that capacity if used responsibly. Moreover, he said ChatGPT is a very useful tool in many aspects of day-to-day medical practice too, as long as you are aware of its limitations and don't abdicate your responsibility as a doctor.
(Hearing of doctors consulting with ChatGPT shouldn't be too shocking, at least not after the initial shock one may get after learning just how much doctors rely on googling stuff when you're not looking.)
Secondly, I've found it useful to bounce some of my own worries off a SOTA LLM: it can tease out the patterns from incoherent emotional rambling and give me starting points for further research, which I otherwise wouldn't be able to formulate myself, because of the whole emotional rambling thing.
And then there are the less subjective aspects:
> My guess is it's mainly lonely people having someone to talk to
You don't need to be lonely to have no one to talk to about internal struggles, especially when they run deep and don't seem to get better. Trying to turn a friend or a spouse into your long-term therapist is a recipe for losing them.
As for actual therapy - it's fucking expensive. Many people who'd benefit from it can't afford it at all, and those who can are still likely to burn through some serious money before finding a good fit. And if your case is nontrivial, the standard therapy setup is pretty much useless - an hour of talk once a week is way too little, way too infrequent.
When you find yourself needing to continue a conversation for many long hours, or needing it to be a daily thing so your mind doesn't wipe the slate clean between meetings - I don't think even a good tech salary will help you here. It's just prohibitively expensive.
So while GPT-4 or Claude 3.5 family models may not be good therapists (though they're likely better than some bad ones you might encounter), they are effectively free in comparison to the real deal. They're also available 24/7 for sessions of arbitrary length, and you get to keep the transcripts to study and refer back to (or show to a real therapist, if you're courageous enough :)). They also don't get tired or bored, don't mind you going off on weird tangents, and if you're committed enough, they will adapt to work with you and your specific needs - you control the system prompt and the conversation context. Hell, they'll happily help you tune the system prompts.
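To make the "you control the system prompt and the conversation context" part concrete - here's a minimal sketch using the OpenAI Python client. The system prompt text is purely illustrative (my own example, not a recommended setup), and you'd tune it to your needs:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system prompt is fully under your control - this one is just an example.
    system_prompt = (
        "You help me untangle emotional rambling. Reflect back the patterns "
        "you notice, ask clarifying questions, and suggest topics I could "
        "research further on my own. Do not just validate everything I say."
    )

    messages = [{"role": "system", "content": system_prompt}]

    while True:
        messages.append({"role": "user", "content": input("> ")})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = reply.choices[0].message.content
        print(answer)
        # The full `messages` list is the conversation context - and also your
        # transcript, free to save, study, or show to a real therapist later.
        messages.append({"role": "assistant", "content": answer})

The same idea works against Anthropic's API for the Claude models; the point is just that the whole setup is a handful of lines you fully control.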
So yeah, it's not a substitute for therapy - it's something in between therapy and self-help (googling, reading books, etc.). It's still useful, because hardly anyone can afford effective therapy, and the conversational aspect makes some parts of self-help easier too.
> in the less charitable case a soundboard to confirm whatever beliefs they already hold
That's a risk that comes with empowering people to access knowledge to help themselves. Yes, there's the well-known stereotype of the modern patient, a smart-ass with WebMD-induced hypochondria - but the existence of such people isn't a reason to shut down WebMD, or to ban blogs and community resources covering ADHD or autism, etc. Most people use these things responsibly (even if not very effectively).