Asset or adversary? The intersection of AI and cyber security adds new layers to digital trust
Published: October 1, 2025

In 2023, Samsung employees accidentally leaked confidential information by using ChatGPT to review internal code and documents. The incident prompted Samsung to ban generative AI tools like ChatGPT across the company that May. It is just one example of how AI intersects with cyber security: in this case, the convenience of an AI tool led to a serious security lapse.
But how can we integrate AI tools into our workflows to improve our cyber security posture without exposing ourselves to vulnerabilities — or even creating new ones?
In the context of higher education, where institutions store extensive and high-value data, AI can be both an asset and an adversary. Its impact is both transformative and deeply complex.
“Every day, we face new security challenges,” says University of Toronto Acting Chief Information Security Officer Deyves Fonseca. “AI makes threat actors faster, deepfakes better and data theft more lucrative. But we’re also working every day on thinking outside the box, creating new solutions and using AI to our advantage.”
The double-edged sword of AI
AI’s ability to process enormous amounts of data and automate tasks has made it a powerful ally in cyber security. From scanning audit logs to flagging anomalies, AI tools are helping security teams work faster and smarter.
“We used to review logs manually, skimming through potentially millions of lines,” explains Jesse Beard, information security and privacy specialist. “Now AI does that for us and gives us a much shorter list of what we need to look at. It’s a game changer.”
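As a rough illustration of that kind of triage — and only an illustration, not the team’s actual tooling — the sketch below surfaces log lines whose normalized pattern is rare in a file. The log format, normalization rules and rarity threshold are all assumptions made for the example.

```python
# Minimal sketch of frequency-based log triage: lines whose normalized
# "template" appears rarely are surfaced for human review.
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse variable tokens (IPs, hex values, numbers) so similar events group together."""
    line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<ip>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)
    return re.sub(r"\d+", "<num>", line).strip()

def triage(lines, max_count=2):
    """Return only the lines whose template appears at most `max_count` times."""
    counts = Counter(template(l) for l in lines)
    return [l for l in lines if counts[template(l)] <= max_count]

if __name__ == "__main__":
    sample = [
        "accepted connection from 10.0.0.5 port 51234",
        "accepted connection from 10.0.0.6 port 51240",
        "accepted connection from 10.0.0.7 port 51311",
        "authentication failure for admin from 203.0.113.9",
    ]
    for line in triage(sample):
        print("REVIEW:", line)
```

Real deployments lean on far richer models and data sources, but the principle is the same: let the machine shrink millions of lines down to a short list a human can actually read.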
AI can also accelerate crucial cyber security processes like risk assessments, firewall rule audits and documentation. But the same capabilities that make AI useful for defenders are also being exploited by attackers.
“The biggest difference I’ve noticed is that AI-generated phishing emails are much better,” Beard warns. “It’s scary how far they’ve progressed. The telltale signs — bad links, spelling mistakes — are gone. It’s tough to spot phishes now.”
New threats, familiar vulnerabilities
While AI may introduce new kinds of risks, their roots trace back to longstanding cyber security challenges. Data leakage, unauthorized access and unpredictable system behaviour are not unique to AI, but the speed and scale at which AI operates magnify their impact.
“New technology always brings with it new vulnerabilities,” says Carl Chan, manager, visibility and infrastructure security. “Some are genuinely new, but many are just old vulnerabilities presented in new ways. What’s dangerous with AI is that in the rush to implement AI-enhanced tools, best practices in protection are sometimes ignored or bypassed.”
Chan emphasizes that treating AI development like any other software — applying secure deployment practices and limiting access — can mitigate risks.
The higher ed context: Compliance and caution
In addition to aiding threat actors as much as defenders, AI can itself become a vulnerability when integrated into complex networks and systems, because it often forms a highly visible, centralized point of access.
In the Canadian postsecondary context, compliance with privacy legislation such as Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) limits how AI can be integrated into systems and networks. Universities must tread carefully when deploying AI systems that interact with personal data.
“We can’t just let AI access all our data,” Beard explains. “The risks and impacts of exposure greatly outweigh the benefits — at least for now. So, integration is limited unless we can configure systems to restrict access appropriately.”
To manage these risks, universities are implementing AI tools in isolated environments or “sandboxes.” For example, ChatGPT is deployed in a dedicated tenant accessible only to university staff and faculty who opt in, without full integration into core systems.
AI on our side
As attackers start to use AI extensively, defenders must respond in kind. Fonseca notes that AI is being used to review machine logs and detect threats in real time. Beard describes how AI can identify malicious connection attempts to our systems and automatically block them — tasks that would take human teams hours or days.
“We’ll need to employ AI on our side, recognizing these attacks and making adjustments on the fly,” Beard says.
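As a simple sketch of what adjusting on the fly can look like — again an illustration, not a description of U of T’s systems — the snippet below blocks a source address once its failed connection attempts exceed a threshold within a short window. The event format, window length and threshold are assumptions made for the example.

```python
# Illustrative auto-blocking sketch: sources that exceed a failure threshold
# inside a short window are added to a blocklist. In practice, decisions like
# this would feed a firewall or intrusion-prevention system.
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # how far back failed attempts are counted
MAX_FAILURES = 5      # attempts tolerated inside the window before blocking

recent_failures = defaultdict(deque)  # source IP -> timestamps of recent failures
blocklist = set()

def record_failure(source_ip, timestamp):
    """Track a failed attempt; return True when the source crosses the blocking threshold."""
    attempts = recent_failures[source_ip]
    attempts.append(timestamp)
    # Discard attempts that have aged out of the window.
    while attempts and timestamp - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    if source_ip not in blocklist and len(attempts) > MAX_FAILURES:
        blocklist.add(source_ip)
        return True
    return False

if __name__ == "__main__":
    for t in range(10):
        if record_failure("203.0.113.9", float(t)):
            print(f"blocking 203.0.113.9 after {t + 1} failures in {WINDOW_SECONDS}s")
            break
```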
This escalating dynamic is leading to what some call the “lethal trifecta”: AI systems that combine access to private data, the ability to communicate externally and exposure to untrusted content. These systems are high-risk targets and require robust safeguards.
Looking ahead: Awareness and resilience
Perhaps the most important shift AI has triggered is in public awareness. Chan observes that AI has helped convince many who previously thought, “Nobody would ever target me,” to take cyber security seriously.
“That increased awareness has helped our security,” he says. “Because ultimately the best cyber security defense starts with each user.”
The urgency to build resilient, adaptive defenses grows by the day. The future of cyber security in higher education will depend not only on technology, but also on thoughtful implementation, cross-campus collaboration and a commitment to continuous learning.

Artificial intelligence is transforming how we work, learn and protect information. During week one of Cyber Security Awareness Month, we’re thinking about how AI intersects with the security of our systems, networks and data. While it can be a powerful security tool, AI can also be exploited by attackers — and can even represent a vulnerability itself. Understanding how AI can be both an asset and an adversary is key to keeping the U of T community safe.