Reports of LLMs mastering math have been greatly exaggerated
by Ernest Davis
The refusal of these kinds of AI to admit ignorance or incapacity, and their obstinate preference for generating incorrect but plausible-looking answers instead, are among their most dangerous characteristics. It is extremely easy for a user to pose a question to an LLM, get what looks like a valid answer, and then trust it, without doing the careful inspection necessary to check that it is actually right.
This is one respect in which human experts and LLMs diverge. Both are capable of making mistakes, but humans with expertise in a given area are generally more aware of the limits and gaps in their own knowledge.