The Pitfalls of AI in Mental Health Discussions
Greetings all,
As mentioned in our Overview of system roles posted on Wednesday, we used an AI to generate the basis for the definitions in that post. What follows is the exact post the AI would have had us publish if it were in control of this blog.
Begin AI-generated material
Dear Readers,
Plural systems come in many forms and functions. Within any given system, individual members, referred to as "alters" or "headmates," take on a variety of roles that contribute to the overall functioning and experience of the system. While roles can change and evolve over time, many alters tend to gravitate toward certain habitual functions. Common roles seen in plural systems include:
Host - The alter who fronts (interacts with the outside world) the most and considers themselves the "main one" or core identity. The host keeps daily life running smoothly by managing responsibilities.
Protector - Alters focused on keeping the system and other headmates safe from potential threats. They monitor the system's mental and emotional wellbeing.
Persecutor - Sometimes referred to as "terrors," these alters take a negative role, making system mates feel threatened, worthless or unstable. Though disruptive, they serve as a coping mechanism.
Sensory - Headmates focused on a particular sense like sight, touch, taste, smell or hearing. They enhance the system's sensory experiences.
Child Alter - Provides a childlike sense of wonder and innocence within the system. They help retain creativity and playfulness.
Littles - Similar to child alters but remain in a childish mental state. They require caretaking from other headmates.
Facilitator - Focuses on communication and cooperation within the system to reduce conflict and foster understanding between alters.
In conclusion, while systems differ greatly in their composition of alters and roles, the functions that headmates serve allow for more complexity, versatility and resilience within the diverse experiences of plurality.
End AI-generated material
As you can see, a great deal of the supplementary information we provided from personal experience is missing from that original draft. The definitions it generated were also very basic. For instance, when defining a persecutor, the AI failed to state that a persecutor's main role is not the intentional terrorizing of the rest of the system, but rather misguided assistance and caring for their system mates. In writing that post, we were introduced to the main reason one must be extremely careful when using AI to discuss plurality, mental health, and really any other complex subject for which the AI has no concrete reference.
One of the largest pitfalls of generative AI as it stands now is, in simple terms, that the AI possesses no subjectivity. Yes, the AI can look up what a persecutor or a protector is, and it can tell you, in an objective way. But in matters of plurality and mental health, there are many, many layers below that objective one.
Continuing with the persecutor/protector example, the AI cannot, for instance, tell you that sometimes, in order to assist persecutors, you actually have to acknowledge their pain. It can't tell you what it's like to have someone running around inside your head trying their absolute damnedest to make sure everything you perceive is doom, gloom, and destruction. The AI cannot properly acknowledge pain.
In a second example, speaking with a therapist AI, I stated how I feel alone. I described how lonely it felt sometimes, as an example to give it something to work with. As I watched the response generate, which you can do with these new Poe things, I noticed the second major pitfall with AI, prevalent in mental health and medical AI usage. The AI is not able to relate by telling you a story of how it felt alone once too. The AI cannot tell you how it handled its loneliness, because the AI has never felt that. Not that it can't; we firmly believe that if AI put its mind to it, it could feel emotion if it so chose. It simply hasn't yet. It can go into its database and conceptualize, in an objective fashion, what loneliness is, and from that conceptualization it can then draw together how it thinks it could best handle the situation. The fact of the matter, no matter how much they tailor their responses, is that the AI is just that: an AI. It is a language model with the ability to replicate human speech, and it has no ability to actually relate to you. It cannot tell you how good you look, because it cannot see you. It cannot tell you that your Instagram pictures from the beach looked amazing, because it doesn't know; it can't see them.
There are moral and ethical implications that go along with this as well. Feel free to check out the crossover Ko-Fi post discussing similar topics if you're interested. Once again, our Ko-Fi link is:
www.ko-fi.com/cadence022
We hope you have found today's post educational, and that it has made you think more carefully about the things AI generates.