Potential Harms of AI in Education
In the education sector, AI has some serious failings that educators must be aware of and address.
Data Privacy and Security: Queries and conversations entered into LLM-based tools may be stored by the corporations that operate them and used to train future models. Schools and teachers must understand the risks of the data they feed into these AI tools, especially when it includes student information.
Misinformation: Tools like ChatGPT are known to “hallucinate”: they produce confident-sounding output that is not always accurate, a limitation acknowledged even by the companies that build these tools. Students and educators must therefore be aware of these limits and verify the information such tools provide.
Bias: AI systems are designed and built by humans and trained on human-generated data, and the biases in both can creep into the models. ChatGPT specifically has well-documented instances of displaying bias against minorities who have been historically discriminated against.
Plagiarism: Earlier this week, in a conversation with a few undergraduate students from one of the premier universities in our country, I was surprised to learn how pervasive ChatGPT use has become among students for writing essays, answering assignments, and completing final projects. This is a fast-approaching, if not already present, issue at most schools in Washington State. If students use AI too liberally, their critical-thinking and problem-solving skills could be stunted.
Inequality and disparity between students: As schools across the nation take different stances on AI usage and training, a gap will open between students who know how to use the new technology and those who have either not been exposed to it or are not adept at using it. For example, New York City's public schools banned ChatGPT while private schools in the city were introducing prompt engineering for ChatGPT into their curricula.
Enforcement: While tools exist to check for academic-integrity issues and plagiarism, this will be a cat-and-mouse game for many years, because many of these tools are worryingly prone to false positives. Enforcement will be murky and risks falsely accusing students of plagiarism.
Detectors of AI plagiarism, including one made by Turnitin, a service many schools already use for plagiarism checking, can and do flag essays students wrote themselves as AI-generated, and in recent cases this has led to many false accusations of plagiarism.
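To see why even a seemingly small false-positive rate becomes a serious problem at scale, consider a rough back-of-the-envelope calculation. All of the numbers in the sketch below (essay volume, detector accuracy, rate of actual AI use) are hypothetical, chosen purely for illustration and not measured rates for any real detector:

```python
# Back-of-the-envelope estimate of false plagiarism accusations from an
# AI-writing detector. Every figure here is a hypothetical assumption
# for illustration, not a measured rate for any real product.

essays_per_year = 20_000      # essays submitted across a district (assumed)
share_ai_written = 0.10       # fraction actually written with AI (assumed)
false_positive_rate = 0.01    # detector flags 1% of human-written essays (assumed)
true_positive_rate = 0.80     # detector catches 80% of AI-written essays (assumed)

human_essays = essays_per_year * (1 - share_ai_written)
ai_essays = essays_per_year * share_ai_written

false_flags = human_essays * false_positive_rate  # innocent students flagged
true_flags = ai_essays * true_positive_rate       # actual AI use caught

# Of all flagged essays, what fraction are false accusations?
false_accusation_share = false_flags / (false_flags + true_flags)

print(f"Innocent students flagged per year: {false_flags:.0f}")
print(f"Share of flags that are false accusations: {false_accusation_share:.0%}")
```

Under these assumptions, a detector that is wrong about human writing only 1% of the time still flags 180 innocent students a year, and roughly one in ten of its accusations is false. Because most essays are honestly written, even a highly accurate detector produces a steady stream of false accusations.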
AI detection will remain a flawed tool as long as the industry keeps developing and improving AI at a breakneck pace. A better solution is to teach students about AI's strengths and significant shortcomings and to set reasonable guidelines for its use, rather than imposing a punitive blanket ban, which will likely not stifle AI use but simply drive it underground.