
Artificial intelligence has moved into classrooms faster than most schools can keep up. Teachers use it to grade assignments, plan lessons, and spark student creativity. Students use it for research, writing help, and coding projects. The technology’s presence is already reshaping how education works day to day. But with that rapid adoption comes a growing set of risks that many schools aren’t prepared to handle.
Keeper Security’s new report, AI in Schools: From Promise to Peril, reveals how quickly innovation has outpaced protection. It finds that the same tools fueling new learning opportunities are also opening fresh paths for cyberattacks and misuse.
Cyber Incidents in the Classroom
AI has become part of everyday classroom life. Keeper’s survey found that 86 percent of schools allow students to use AI tools, and 91 percent permit faculty to do the same.
The problem is that this widespread use is happening without much structure. Most schools rely on informal guidance rather than formal policy. Teachers and students are often left to interpret what “responsible use” means on their own.
Without guardrails, it’s no surprise that problems are cropping up. Keeper’s research shows that 41 percent of schools have already experienced AI-related cyber incidents. The list of issues is wide-ranging: phishing emails written by chatbots, student data accidentally fed into AI tools, and deepfake videos used to harass or mislead.
Some of these incidents are caught and contained quickly. Others slip by entirely, hidden by the very technology that created them. According to Keeper’s findings, 39 percent of schools may be experiencing AI-related threats without realizing it.
Awareness Doesn’t Equal Readiness
Most educators know AI brings risk—83 percent of leaders in Keeper’s survey said they’re aware of the dangers. But when pressed on specifics, only one in four said they felt confident identifying AI-enabled threats like deepfakes or synthetic phishing.
That gap points to a deeper problem: schools aren’t equipped to spot or stop emerging attacks. Few have training programs that explain how these threats work, and even fewer have the monitoring tools to catch them in real time.
“The challenge is not a lack of awareness, but the difficulty of knowing when AI crosses the line from helpful to harmful,” said Anne Cutler, Cybersecurity Evangelist at Keeper Security.
“The same tools that help a student brainstorm an essay can also be misused to create a convincing phishing message or even a deepfake of a classmate. Without visibility, schools struggle to separate legitimate use from activity that introduces risk.”
From Guidance to Governance
Managing AI safely in schools starts with governance. Keeper’s report argues for replacing ad-hoc guidelines with clear, enforceable frameworks that define how AI can be used safely. That means policies that set boundaries around data sharing, require transparency in coursework, and hold both staff and students accountable for responsible use.
“Policies provide a necessary framework that balances innovation with accountability,” Cutler said. “That means setting expectations for how AI can support learning, ensuring sensitive information such as student records or intellectual property cannot be shared with external platforms, and mandating transparency about when and how AI is used in coursework or research. Taken together, these steps preserve academic integrity and protect sensitive data.”
Some institutions are already exploring models that blend policy with education. Cross-disciplinary digital ethics committees can review new AI tools before they’re deployed. Security-focused curricula can teach faculty and students how to recognize phishing attempts, deepfakes, and data leaks. And ongoing awareness programs can treat cybersecurity as a core element of learning.
Teaching AI Responsibly
AI is here to stay in schools, for better or worse. The challenge is making sure it’s used responsibly so it enhances learning without exposing students or institutions to unnecessary risk. That starts with expanding AI literacy to include cybersecurity awareness. Students should understand how these tools can be exploited.
Proactive policy can help turn AI from a potential threat vector into a hands-on lesson in digital responsibility. Training teachers to spot manipulation, teaching students to question what they see and hear, and giving administrators the tools to monitor activity without stifling innovation all build a culture of security. If schools can balance innovation with accountability, AI won’t just change how students learn; it’ll change how they think about security in a digital world.