This article has been a long time in the making. To create it, we worked through many hypotheticals and real-life scenarios to arrive at feasible action steps for when any of these situations lands in your school or district. We wanted to share our real-world experiences so that you can be better prepared than we were when we first encountered them. Below are our versions of each scenario (kept district-agnostic to respect privacy), along with some of the thoughts we shared on how to address each one.
Case Study 1: AI-Assisted Plagiarism
Scenario:
A 10th-grade student uses an AI text generator to complete a history essay. The student copies and pastes AI-generated text directly into the assignment without any modification. The teacher suspects the work isn't the student's own because of the writing style, but traditional plagiarism checkers do not flag the content since it is original, just not written by the student.
Solution:
AI Literacy for Teachers: Teachers should be trained to recognize AI-generated content by familiarizing themselves with common patterns, such as text that is overly polished or factually correct yet superficially deep. They can also use AI tools themselves to cross-reference suspicious work. We do not recommend AI-detection tools: their accuracy is not high enough, and a false accusation can fracture student-teacher relationships, to say nothing of tarnishing academic careers.
In-Class Writing Assignments: Assign essays or portions of essays to be written in class where AI tools are not accessible.
Ethics Discussions: Engage students in discussions about academic integrity and the role of AI in education. Explain the consequences of misuse and emphasize the importance of learning through their own work. We have begun development on online course videos to this end.
Progress Checks: Schools could implement policies requiring students to submit drafts that show their writing process. Students can still use AI the entire way through, but by having them write drafts in class, you can photograph the draft, compare it to the final submission, and track the changes. Surprise! There is only so much change and writing growth possible over the course of a single assignment.
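For schools that collect drafts digitally, the draft-to-final comparison above can even be roughly quantified. The sketch below is a minimal, hypothetical example using Python's standard `difflib` module: it scores how similar a draft is to the final submission, where a very low score on a short assignment might prompt a closer look. The function name and sample text are our own illustration, not a real product.

```python
import difflib

def draft_similarity(draft: str, final: str) -> float:
    """Return a 0-1 similarity ratio between a student's in-class
    draft and their submitted final essay, compared word by word."""
    return difflib.SequenceMatcher(None, draft.split(), final.split()).ratio()

# Illustrative use: a plausible revision keeps much of the draft's wording.
draft = "The civil war began in 1861 because of many reasons"
final = "The American Civil War began in 1861 for many interconnected reasons"
print(f"Similarity: {draft_similarity(draft, final):.2f}")
```

A score near 1.0 means the final closely tracks the draft; a score near 0.0 means almost nothing survived, which is worth a conversation rather than an automatic accusation.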
Case Study 2: AI-Generated Cyberbullying
Scenario:
A group of middle school students uses an AI chatbot to generate hurtful messages targeting a peer. They input the peer's name and personal characteristics into the chatbot, which generates insults and derogatory remarks. These messages are then spread on social media anonymously.
Solution:
Digital Citizenship Education: Teach students about responsible online behavior and the ethical use of technology. The new course materials we are developing address this. Schools can also reinforce anti-bullying protocols and make clear that using AI tools to harm others is as serious as doing so directly.
Monitoring and Reporting: Educators and administrators should establish protocols where students can report instances of cyberbullying. IT teams could monitor school networks for harmful behavior linked to AI tools.
Counseling and Restorative Justice: When AI is used maliciously, engage the involved students in counseling and restorative justice practices. Focus on rebuilding trust and creating an understanding of how their behavior affected their peers.
AI Tool Restrictions: Schools can block access to certain AI platforms or restrict their use to supervised environments like libraries or classrooms where AI use is monitored. While we understand this doesn't address home use, there is a certain comfort in knowing that these situations won't be taking place on your school grounds.
Case Study 3: AI-Created Deepfake Content
Scenario:
A high school student creates a deepfake video of their teacher saying inappropriate things using AI video editing software. The student spreads the video to classmates, causing the teacher's reputation to be questioned, despite the video being fabricated.
Solution:
Digital Forensics Awareness: Schools should provide educators with training on deepfakes and how to recognize signs of manipulated media. If a deepfake is suspected, immediate action to involve school IT staff or law enforcement may be necessary.
Clear Policy on Media Manipulation: Establish strict policies against media manipulation, including severe consequences for students caught creating and spreading deepfakes.
AI Ethics Curriculum: Include lessons on the ethical implications of using AI for creating false media, emphasizing personal responsibility and the broader impact of misinformation.
Tech Accountability: Educate students on the legal implications of creating and sharing defamatory deepfakes. Schools could involve legal experts to explain real-world consequences.
Case Study 4: AI to Circumvent School Rules
Scenario:
A student uses an AI tool to hack into the school's firewall and bypass content restrictions, allowing them to access gaming sites or restricted social media platforms during school hours. The student shares this method with peers, and soon many students are following suit.
Solution:
Stronger Cybersecurity: Schools should regularly update firewalls and security systems to prevent unauthorized access. This includes investing in AI-powered cybersecurity that adapts to malicious attempts in real time.
Cybersecurity Awareness: Provide cybersecurity education for students, making it clear that hacking or bypassing school security systems is illegal and will result in severe consequences.
Limit Tech Privileges: Implement stricter rules for personal devices and network access. Create a tiered access system where responsible use grants more privileges, while misuse leads to restricted access.
Collaborate with IT Teams: Teachers should collaborate closely with IT staff to stay aware of emerging AI-driven hacking techniques and continuously update security protocols.
Case Study 5: AI-Driven Cheating in Exams
Scenario:
During a math exam, a student uses an AI-powered calculator to solve complex trigonometry problems instantly by secretly accessing it on a smartwatch or hidden phone. This gives the student an unfair advantage, and the use of AI is difficult to detect in real-time.
Solution:
Ban Unapproved Devices: Implement strict rules banning smart devices during exams. Proctors should carefully monitor students for signs of covert technology use, including wearable devices.
Handwritten Exams: Emphasize handwritten solutions where students must show their work, making it harder to rely solely on AI-generated answers.
Randomized Questions: Use exam software or methods that randomize questions for each student, reducing the effectiveness of AI tools during timed tests.
Encourage Process over Results: In class, encourage students to demonstrate how they arrived at their answers, focusing on their understanding of the process rather than just the final solution.
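The randomized-questions idea above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical exam platform where each student has an ID: seeding the shuffle with that ID gives every student a distinct but reproducible question order, so a proctor can regenerate any student's version later.

```python
import random

def randomized_exam(questions: list[str], student_id: str) -> list[str]:
    """Return the question list shuffled deterministically per student,
    so each student sees a different (but reproducible) ordering."""
    rng = random.Random(student_id)  # seed the generator with the student ID
    shuffled = questions[:]          # copy so the master list is untouched
    rng.shuffle(shuffled)
    return shuffled

# Illustrative use with made-up question labels.
exam = ["Q1: sine rule", "Q2: cosine rule", "Q3: tangent identity", "Q4: unit circle"]
print(randomized_exam(exam, "student-042"))
```

Because the ordering depends only on the student ID, copying a neighbor's answer sheet by position no longer works, yet grading keys can still be regenerated on demand.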
Preparing Teachers for AI Misuse
Professional Development: Schools should provide ongoing professional development to help teachers stay informed about new AI tools and potential risks. This training should also include how to use AI ethically in education.
AI Monitoring Systems: Schools should consider adopting AI-monitoring systems to track the use of AI tools within their network and flag suspicious behavior.
Clear Policies on AI Use: Schools need clear policies that outline acceptable and unacceptable uses of AI in and outside the classroom. These policies should be communicated to students and parents alike.
Open Communication: Teachers should foster open communication with students about AI, encouraging them to ask questions and discuss potential uses or misuses of technology.
Understanding Kids Will Be Kids: One thing we mention with all of this is that it is our job to educate and steer kids in the right direction, but kids will be kids. Mistakes will be made, and there is ultimately only so much education we can provide to mitigate the harm that can come. Mitigating, not eliminating, is the perspective we take. The further we get ahead on ethics and the proper use of these tools, the better job of mitigating we can do.
We hope this information helps any teacher or administrator deal with these problems, should they ever come up. Our society did not get ahead of social media fast enough, and we ended up dealing with bullying in ways we never thought possible. Cyberbullying emerged, Yik Yak and other anonymous commenting apps were born, and fake bomb threats were called in to push a test back a few days or earn a free day to play hooky. We did not do a good enough job then, but with proper preparation, we can do better now with even more powerful technology. It starts with education, then training. If our nation's educators are trained well enough, we can face anything thrown at us in the future with confidence. Thank you!