banned employees from using ChatGPT, citing the potential for confidential information to be leaked among other security concerns.
LLMs are mathematical models that ingest huge amounts of writing, mostly scraped from the internet, and generate combinations of words that are statistically likely to appear together. When asked to complete the sentence "The school is...", the LLM will guess which word is most likely to come next based on similar combinations of words in its training data. It has no concept of the meaning of "school" and therefore is incapable of determining whether the word it picks is accurate.
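The "guess the next word" behavior described above can be sketched with a toy frequency model. This is only an illustration: the corpus below is invented, and real LLMs use neural networks trained on billions of documents rather than raw counts, but the core idea is the same: pick the continuation that appeared most often in training, with no notion of what the words mean.

```python
# Toy sketch of next-word prediction over an invented mini-corpus.
# Real LLMs are far more sophisticated, but share the same core idea:
# choose the statistically likeliest continuation, not the true one.
from collections import Counter, defaultdict

corpus = (
    "the school is closed today . "
    "the school is open on monday . "
    "the school is closed for the holiday ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation -- frequency, not meaning."""
    return following[word].most_common(1)[0][0]

print(predict_next("is"))  # "closed" wins (2 occurrences) over "open" (1)
```

Whether the school really is closed never enters the calculation; "closed" wins simply because it was more frequent in the training text.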
Because LLMs generate their responses based on existing writing, they cannot produce new ideas. They may synthesize multiple sources at a superficial level, but they are not capable of analysis or original thought. At best, they produce the "average" response a human might write, and that average reflects frequency in the training data, not accuracy.
LLMs can't attribute ideas to their original sources. Because they string together phrases and sentences using statistics, they have no concept of where an idea in their training data originated. When asked to provide sources, they often produce citations that are relevant to the topic but aren't actually the source of the information attributed to them. In some cases, they invent realistic-looking citations to sources that don't exist at all.
Right now, many AI companies are subsidizing the cost of these tools for research and/or marketing purposes. LLMs cost companies hundreds of thousands, if not millions, of dollars per month to operate, so keeping them free forever is not sustainable. It's likely that in the near future these companies will either start charging for their products or go out of business before becoming profitable.
As of this writing, there is no reliable way to determine whether text has been generated by an LLM. OpenAI, the company behind ChatGPT, has discontinued its own detection tool, citing accuracy issues. Third-party tools are subject to the same problems: false positives harm students' academic careers, and work written by non-native English speakers is disproportionately flagged as AI-generated.
If you're looking to engage your students in critical thinking about AI, Autumm Caines' "Prior to (or instead of) using ChatGPT with your students" outlines activities and discussions that examine the impacts and implications of AI and other technologies. You can use these to introduce students to these tools in a way that doesn't jeopardize their privacy or intellectual property.