FWIW, this is why AI researchers have been screeching for decades not to create an AI that is anthropomorphized. It's already an issue we have with animals; now we're going to add a confabulation engine to the ass-end?
LLMs are trained on human writing, so they'll always be fundamentally anthropomorphic. You could fine-tune them to sound more clinical, but that would likely make them worse at reasoning and planning.
For example, I notice GPT-5 uses "I" a lot, especially saying things like "I need to make a choice" or "my suspicion is." I think that's actually a side effect of the RL training they've done to make it more agentic; having some concept of self is necessary when navigating an environment.
Philosophical zombies are no longer a thought experiment.
People have this issue with video game characters that don't even pretend to have intelligence. This can only go wrong.
Yeah, apparently even ELIZA messed with people's heads back in the day, and that's not even an LLM.
I’m starting to realize how easily fooled people are by this stuff. The average person cannot be this stupid, and yet, they are.
I was once in a restaurant, and behind me was a group of twenty-something-year-olds. I overheard someone asking something like: "So what are y'all's thoughts on VR?" (This was just before the whole AI boom.) And one guy said: "It's kind of scary to think about." I was super confused at that point, and they went on to talk about how they'd heard of people disappearing into cyberspace and not knowing what's real and what's just VR.
I don’t think they were stupid, but they formed a very strong opinion about something they clearly didn’t know anything about.
I'd like to believe he heard a summary of Sword Art Online's plot and thought it was real.
Wait, it's not? But The Matrix!
Personally, I hate the idea of not doing something because there are idiots out there who will fuck themselves up on it. The current gen of AI might be a waste of resources, and the very goal of AI might be incompatible with society's existence; those are good reasons to at least be cautious about AI.
I don't think people wanting to have relationships with an AI is a good reason to stop it, especially considering that it might even be a good option for some people who would otherwise have no one, or maybe too many cats to care for. Consider the creepy stalker type who thinks liking someone or something gives them ownership over that person or thing. Better for them to be obsessed with an LLM they can't hurt than with a real person they might (or whom they will at least make uncomfortable, even if they end up being harmless overall).