I think the LLM won here. If you’re being accusatory and outright saying its previous statement is a lie, you’ve already made up your mind. The chatbot knows it can’t change your mind, so it suggests changing the topic.
It’s not a spokesperson bot for Microsoft, nor a lawyer. So it knows when it should shut itself off.
The chatbot doesn’t know anything. It has no state like that; your text just gets appended to its text.
It has been prompted to disengage from disagreement, or something similar, by a human designer.
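Roughly, a stateless chat loop looks like this. A toy sketch, with the prompt wording and the generate_reply stand-in entirely invented rather than taken from any real product:

```python
# Hypothetical sketch: the model keeps no memory. Every turn, the whole
# transcript is re-sent as one block of text, and "disengaging" is just
# behavior a human wrote into the system prompt.

def generate_reply(messages: list[dict]) -> str:
    """Stand-in for an LLM API call. A real model sees only the text
    passed in here; there is no hidden state anywhere else."""
    last = messages[-1]["content"].lower()
    if "lie" in last or "liar" in last:
        # Mimics the system prompt's instruction to disengage.
        return "I'd prefer not to continue this conversation. New topic?"
    return "Noted: " + messages[-1]["content"]

transcript = [
    {"role": "system",
     "content": "If the user becomes hostile or accuses you of lying, "
                "politely suggest changing the topic instead of arguing."},
]

while True:
    user_text = input("> ")
    transcript.append({"role": "user", "content": user_text})
    reply = generate_reply(transcript)   # full history re-sent every turn
    transcript.append({"role": "assistant", "content": reply})
    print(reply)
```

Nothing persists between turns except the growing transcript itself; the “knowing” is all in the appended text.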
I don’t know why the discourse about AI has become so philosophical.
When I’m playing a single-player game and I say “the AI opponents know I’m hiding behind cover, so they threw a grenade!”, I don’t mean that the video game gained sentience and discovered the best thing to do to win against me.
When playing a stealth game, we say “The enemy can’t see you if you’re behind cover”, not “The enemy has been programmed to not take any action against the player character when said player character is identified as being granted the Cover status”.
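Spelled out as code, the long version is just this kind of check. A toy sketch, every name invented:

```python
# "The enemy can't see you" is a branch on a status flag, not perception.
from dataclasses import dataclass

@dataclass
class Player:
    has_cover_status: bool

def enemy_update(player: Player) -> str:
    # The enemy "decides" nothing; it only checks the flag.
    if player.has_cover_status:
        return "patrol"   # takes no action against the player
    return "attack"

print(enemy_update(Player(has_cover_status=True)))   # patrol
print(enemy_update(Player(has_cover_status=False)))  # attack
```

The anthropomorphic phrasing is just shorthand for that conditional.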
To add, I have seen this behavior the moment you get too argumentative, so it’s not like it’s purposely singling some topics out.