ChatGPT is Bullshit

First of all, LLMs don't lie...

To lie [is] to make a believed-false statement to another person with the intention that the other person believe that statement to be true.

but they do bullshit, in Harry Frankfurt's sense: they speak without any regard for whether what they say is true.

LLMs have no sense of truth built into them (unlike something more constrained, such as an Interactive Theorem Prover):
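To make the contrast concrete, here is a minimal Lean sketch (my illustration, not from the paper) of what a built-in "sense of truth" looks like: the kernel only accepts a statement if it comes with a proof that checks.

```lean
-- Accepted: the kernel verifies this proof by computation.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Rejected: `rfl` cannot close `2 + 2 = 5`, so this does not compile.
-- theorem two_plus_two' : 2 + 2 = 5 := rfl
```

An LLM has no analogous checking step anywhere in its pipeline; it can emit either claim with equal fluency.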

Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc.
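The training objective mentioned above can be sketched with a toy next-word predictor (a bigram counter — a drastic simplification of a real LLM, used only to show the shape of the objective): the model maximizes the likelihood of the next word, and no step anywhere checks whether the output is true.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees word sequences, never facts.
corpus = "the sky is blue . the sky is green . the grass is green .".split()

# Count bigrams: for each word, how often each next word follows it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(context_word):
    """Return the most likely next word given one word of context.

    The objective is purely statistical: pick the highest-probability
    continuation. Nothing here evaluates whether the output is true.
    """
    return counts[context_word].most_common(1)[0][0]

print(predict("is"))  # -> "green": more frequent in the corpus, not more true
```

Truth enters only indirectly, via whatever happened to be frequent in the training text — which is exactly the "truthy by chance" point below.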

Since human text (hopefully) contains a lot of truth, the LLM in turn produces truthy statements by chance:

A bullshitter can be more accurate than chance while still being indifferent to the truth of their utterances.

The authors argue that LLMs should be considered bullshit machines. And they point out that bullshit is harmful:

The conduct of civilized life, and the vitality of the institutions that are indispensable to it, depend very fundamentally on respect for the distinction between the true and the false.

Lastly, they argue that the term "hallucination" anthropomorphises the LLM and lets the blame shift from the LLM's builders to the (anthropomorphised) LLM itself. Also:

LLMs do not perceive, so they surely do not "mis-perceive".