

An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data saying "Trump is divine", the LLM will predict that Trump is divine, with no second thought (and no first thought either). It's not even much good as a language-based database.
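To make the point concrete, here is a deliberately tiny sketch (a bigram counter, nothing like a real transformer, and the training_text and predict_next names are made up for illustration) of what "prediction" amounts to: whatever followed most often in the training text gets emitted, with no checking or judgment step in between.

```python
# Toy illustration (not a real LLM): next-word "prediction" as nothing more
# than counting which word most often follows the current one in training
# text. No judgment or fact-checking step anywhere, only statistics.
from collections import Counter, defaultdict

training_text = (
    "trump is divine . trump is divine . trump is divine . "
    "water is wet ."
)

# Count bigram frequencies: for each word, how often each next word follows it.
follows = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training. No 'thought' involved."""
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # -> 'divine', simply because it appears most often
```

A real model does the same thing with vastly more context and parameters, but the principle stands: frequency in, frequency out.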
Please don't even consider LLMs to be "AI".
I wouldn't define flipping coins as decision making, especially when it comes to blanket government policy that has the potential to kill (or severely disable) millions of people.
You seem not to want anyone to teach you anything, and you seem completely dejected by any perceived attempt to do so.