I feel like you've misunderstood what an LLM is (and that's not your fault, there's a lot of misinformation and misuse going around about them right now). I wasn't presenting my answer as correct; I was presenting it to show an LLM completely disagreeing with "itself", offering two contradictory views and presenting each as factually correct.
It's not a reasoning engine that goes out and finds the objective truth on a topic when you ask it to. It synthesises text that looks like it might be the right answer, based on its training dataset and a whole host of other factors.
It's incredibly susceptible to the language used in the prompt, any biases in the training data, anything you've typed previously that hints at the kind of answers people who write the way you do usually get, and so on.
Even using the term "cash for clunkers" vs "Car Allowance Rebate System (CARS)" produces vastly different answers for me. I'd assume that's because the training data most strongly associated with the latter phrase is quite different from the data associated with the former.
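For what it's worth, here's a rough sketch of the kind of comparison I mean, assuming the OpenAI Python client with a placeholder model name and example question wording; any chat-style LLM API would show the same effect:

```python
# Minimal sketch: send the "same" question with two different phrasings
# and compare the answers. Model name and question wording are placeholders;
# the point is only that phrasing alone can shift the output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Was cash for clunkers a success?",
    "Was the Car Allowance Rebate System (CARS) a success?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```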
I'm Israel, he's Palestine, it's more fun when you pick sides.

