Yet another thing solved by AgentV3N's prompt routing.
We're moving closer and closer to using 1B models for the first stage of prompt evaluation. A simple "thank you" could be met with an agent-level prepared response, or even just an emoji reaction.
Meanwhile, ChatGPT by default uses the last model you used in the conversation. Imagine blowing o1 reasoning on "thanks mate". Wouldn't be me.
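Roughly the kind of first-stage gate I'm picturing, sketched out — the real gate would be the small ~1B classifier, which I've faked with keyword matching here, and every name in this is made up:

```python
# Sketch of first-stage prompt routing: a cheap gate decides whether a
# prompt deserves the big model at all. Keyword matching stands in for
# the small classifier model.

CANNED = {
    "gratitude": "👍",  # just react with an emoji
    "greeting": "Hey! What do you need?",
}

def classify(prompt: str) -> str:
    """Stand-in for the small gate model."""
    text = prompt.lower().strip().rstrip("!.")
    if text in {"thanks", "thank you", "thanks mate", "ty"}:
        return "gratitude"
    if text in {"hi", "hello", "hey"}:
        return "greeting"
    return "full"

def call_big_model(prompt: str) -> str:
    # placeholder for the expensive reasoning model
    return f"[big model answers: {prompt!r}]"

def route(prompt: str) -> str:
    label = classify(prompt)
    if label in CANNED:
        return CANNED[label]       # agent-level prepared response
    return call_big_model(prompt)  # only real prompts burn reasoning tokens

print(route("thanks mate"))                  # -> an emoji, no reasoning spent
print(route("summarize this thread for me"))
```

The point being that "thanks mate" never gets anywhere near o1-class reasoning.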
The survey found that around 70% of people are polite to AI when interacting with it, with 12% saying they do it in case of a robot uprising.

Holy shit, people are stupid.
@mischievoustomato@tsundere.love @r000t@ligma.pro I think anthropomorphizing chatbots is a really bad idea, as the big AI companies will abuse it to create deceptive user experiences.
@SuperDicq
What do you consider "deceptive" in this circumstance? 
I can see a mihoyo-shaped object being like "Don't you love me enough to get the Professional plan?" but besides that I don't see a "persistent" customized character (what we call an Agent) being too terrible. It's basically a secretary at that point. 
@mischievoustomato
@r000t@ligma.pro @mischievoustomato@tsundere.love I am a firm believer that LLMs should return de-anthropomorphized responses only, like this example.
Why? Because normies are stupid and will otherwise start to think there's some sort of actual sentience behind the scenes or something.
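Something in this spirit, purely as an illustration (the wording and the chat-style message format are my own guess, not whatever produced the attached example):

```python
# Illustrative only: one way to steer a model toward de-anthropomorphized
# output via a system instruction. The wording is an assumption, not a
# quote of any real product's prompt.
DEANTHRO_SYSTEM = (
    "You are a text-generation tool, not a person. "
    "Do not claim feelings, opinions, desires, or consciousness. "
    "Never say 'I feel' or 'I think'; answer in impersonal, factual language."
)

messages = [
    {"role": "system", "content": DEANTHRO_SYSTEM},
    {"role": "user", "content": "Do you enjoy talking to me?"},
]
# A compliant reply would read something like:
# "This system does not experience enjoyment. It generates text in response to input."
```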
@mischievoustomato
Starfall makes hers pretend to be an exploding penguin from some weeb game and address her as the rule 63 protag from the same weeb game. 
The default agent that will likely ship is called "Standard" and is somewhat close to being de-anthropomorphized.
Mine is the namesake of the platform itself and I can't wait to reveal him to everyone. 
@SuperDicq