Hallucinations
Another major concern for users and owners of LLMs is hallucinations. A hallucination occurs when an LLM gives a wrong answer to a user, yet delivers it with complete confidence. Instead of simply saying it doesn't know, an LLM will produce whatever continuation its training makes statistically likely, even if the result makes no sense.
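To see why the model always produces *something*, here is a minimal sketch of next-token selection. The prompt, the candidate tokens, and the logit values are all invented for illustration; a real model scores every token in its vocabulary, but the mechanics are the same: the scores are converted to probabilities and a token is chosen, whether or not any candidate is actually well supported by facts.

```python
import math

# Toy next-token scores for the prompt "The capital of Atlantis is".
# These logits are made up; a real model emits one score per vocabulary token.
logits = {"Paris": 2.1, "Atlantis": 1.8, "unknown": 0.3, "uncertain": 0.2}

# Softmax turns raw scores into probabilities that always sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding picks the most probable token. The model confidently emits
# an answer even though the question has no correct answer at all.
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))
```

Nothing in this loop checks whether the chosen token is true; the model is rewarded only for plausibility, which is exactly the gap a hallucination falls into.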
This doesn't happen all the time, of course, but it happens often enough to raise serious concerns about how far LLMs and their outputs can be trusted. Many people have also noticed that they're terrible with numbers, often unable even to count the words in their own output.