Hallucinations
Another huge concern for users and owners of LLMs is hallucinations. A hallucination occurs when an LLM confidently gives a user a wrong answer. Instead of simply saying it doesn't know, the model produces whatever response is statistically most plausible given the prompt, even if that response makes absolutely no sense.
This doesn't happen all the time, of course, but it happens often enough to raise serious questions about how far LLMs and their outputs can be trusted. Many people have also noticed that they're terrible with numbers, often unable even to count the words in their own output.
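As a quick illustration of why such claims are easy to check, a few lines of Python can count the words in a reply instead of taking the model's word for it. The reply text and the claimed count below are made-up examples, not output from any particular model.

```python
# A minimal sketch: check a word count locally instead of trusting the model.
# The reply text and the claimed count are hypothetical examples.
model_reply = "The quick brown fox jumps over the lazy dog."
claimed_word_count = 10  # what a model might confidently report

actual_word_count = len(model_reply.split())
print(f"Claimed: {claimed_word_count}, actual: {actual_word_count}")
# Claimed: 10, actual: 9 -- confident, specific, and wrong
```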
Of course, accuracy, hallucination rates, and mathematical ability vary between LLMs. Some models do much better than others, but all of them are prone to problems of this nature, because they weren't trained to fully understand what they're outputting, only to predict what is statistically likely to read as a sensible sentence in the given context.
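To make the "statistically likely" idea concrete, here is a deliberately simplified toy, nothing like a real LLM's architecture: the "model" just returns whichever continuation it saw most often after a prompt in some imaginary training text. The prompt and the counts are invented for illustration.

```python
from collections import Counter

# A toy, not a real LLM: the "model" simply returns whichever continuation
# appeared most often after a prompt in some imaginary training text.
# The prompt and the counts below are invented for illustration.
continuation_counts = {
    "The capital of Australia is": Counter(
        {"Sydney": 58, "Canberra": 41, "Melbourne": 12}
    ),
}

def most_likely_continuation(prompt: str) -> str:
    """Return the most frequent continuation, with no check that it is true."""
    return continuation_counts[prompt].most_common(1)[0][0]

print(most_likely_continuation("The capital of Australia is"))
# Prints "Sydney" -- fluent and confident, but the right answer is Canberra.
```

The point of the toy is only this: picking the most statistically common continuation produces fluent, confident text whether or not that text happens to be true.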