Should I use AI in my classroom?

#ai #teaching

Keeping up with AI developments can feel like drinking from a fire hose, so many educators are wrestling with this question. There is no one-size-fits-all answer, but my hope in this post is to provide background and suggestions that help you separate hype from practicality as you navigate the flood of information.

Large Language Models

Artificial Intelligence (AI) is a broad term without one single agreed-upon meaning. Large Language Models (LLMs), such as the model behind ChatGPT, are one type of generative AI that uses large data sets of text to generate new text. Many platforms marketed toward educators utilize LLMs, and this post will focus primarily on those.


Before deep-diving into AI platforms, here are some considerations to keep in mind.

Student Privacy

Most, if not all, AI platforms have an age requirement that excludes students below age 13, 16, or 18. Many platforms also train their models with user-provided data. This means providing any student information or work, even if the student isn't directly interacting with the platform, may be a violation of FERPA regardless of student age. Once input has been used to train the model, districts no longer have ownership over it and it cannot be retrieved or deleted should a student or guardian make the request. Input used to train the model may also make its way into responses that are generated for other users. Several companies, including Apple, Goldman Sachs, and Samsung, have banned employees from using ChatGPT, citing the potential for confidential information to be leaked among other security concerns.


Accuracy

LLMs are mathematical models that ingest huge amounts of writing, mostly scraped from the internet, and generate combinations of words that are statistically likely to appear together. When asked to complete the sentence "The school is...", the LLM will guess which word is most likely to come next based on similar combinations of words in its training data. It has no concept of the meaning of "school" and therefore is incapable of determining whether the word it picks is accurate.
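To make "statistically likely" concrete, here is a toy sketch of my own (not how real LLMs are built; they use neural networks over far more context, but the underlying idea of predicting by likelihood rather than meaning is the same). It counts which word most often follows each word in a tiny made-up corpus and "predicts" by picking the most frequent follower:

```python
from collections import Counter

# A tiny made-up "training" corpus (illustrative only)
corpus = (
    "the school is open today . "
    "the school is closed on sunday . "
    "the school is open on monday ."
).split()

# Count how often each word follows each other word
followers = {}
for word, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(word, Counter())[nxt] += 1

def predict_next(word):
    # Return the word seen most often after `word` in the corpus
    return followers[word].most_common(1)[0][0]

print(predict_next("is"))  # prints "open" ("open" follows "is" twice, "closed" once)
```

The model outputs "open" not because it knows anything about schools, but because that word followed "is" most often in its data. If the corpus contained mostly false sentences, it would confidently reproduce them.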


Originality

Because LLMs generate their responses based on existing writing, they cannot produce new ideas. They may synthesize multiple sources at a superficial level, but they are not capable of analysis or original thought. At best, they provide the "average" response a human is likely to write, based on the model's training data. This "average" reflects frequency in the training data, not accuracy.


Citations

LLMs can't attribute ideas to their original sources. Because they are stringing together phrases and sentences using statistics, they have no concept of the original source of an idea within their training data. When asked to provide sources, they often write citations that are relevant to the given topic but aren't necessarily the source of the information to which they are attributed. In some cases, they create realistic-looking citations to sources that don't actually exist.


Cost

Right now, many AI companies are subsidizing the cost of these tools for research and/or marketing purposes. LLMs cost companies hundreds of thousands, if not millions, of dollars per month to run, so keeping them free forever is not sustainable. It's likely that in the near future these companies will either start charging for their products or go out of business before they become profitable.

AI Detection

As of this writing, there is no reliable way to determine whether text has been generated by an LLM. OpenAI, the company behind ChatGPT, discontinued its own AI-detection tool, citing its low accuracy.

Third-party tools are subject to the same problems: false positives can harm students' academic careers, and work written by non-native English speakers is disproportionately flagged as AI-generated.

Impacts and Implications of AI

If you're looking to engage your students in critical thinking about AI, Autumm Caines' Prior to (or instead of) using ChatGPT with your students outlines activities and discussions that examine the impacts and implications of AI and other technologies. You can use these to expose students to these tools without jeopardizing their privacy or intellectual property.

Further reading