If security awareness training ever felt a bit “same old”… 2026 is here to fix that.
Two reasons:
- Humans are still a main entry point. One widely cited breach analysis for 2025 puts human involvement at ~60% of breaches.
- AI is making scams faster, cheaper, and more convincing. Microsoft explicitly calls out AI being used to scale phishing and intrusions—and notes AI-driven phishing being three times more effective than traditional campaigns.
So, what should you actually train people on this year—across all industries and regions?
Quick answer: the top 12 security awareness training topics of 2026
Here’s the list (scroll down for the “what to teach” bite-sized breakdowns):
- AI-powered phishing & deepfake impersonation
- Business Email Compromise (BEC)
- QR code phishing (quishing)
- MFA fatigue / “approve this request” scams
- OAuth/device-code phishing + consent-based account takeovers
- Credential hygiene & reuse
- Safe use of generative AI tools at work
- Cloud/SaaS sharing & collaboration app hygiene
- Ransomware & extortion
- Third-party & supplier compromise awareness
- AI voice cloning & executive impersonation
- Insider risk & data handling
Why these topics made the 2026 cut
We picked these based on three signals that matter to IT leaders and MSPs:
- Frequency: Phishing remains a dominant intrusion vector in large-scale incident analysis.
- Impact: Financial fraud and extortion outcomes are consistently high-cost (BEC is a standout in victim losses).
- Acceleration from AI + modern identity abuse: AI is scaling social engineering, and attackers are increasingly exploiting authentication flows—not just “classic” password stealing.
The top security awareness training topics for 2026
1) AI-Powered Phishing & Deepfake Impersonation
What’s changed: phishing hasn’t gone away—it’s levelled up. Attackers are using AI to write more believable messages, localise language, mimic tone, and increase speed/volume. Microsoft explicitly flags threat actors using AI to scale phishing and intrusions.
What to teach (keep it practical):
- Don’t trust tone (“Sounds like my CEO”)—trust verification (known process).
- Watch for last-minute urgency + “keep this quiet” language.
- Treat unexpected attachments and login links as guilty until proven innocent.
- Use a two-step verification habit for sensitive actions (payment, access, data export).
How to reinforce:
- Run short “spot the red flags” micro-modules.
- Follow up phishing failures with immediate, targeted learning (not shame).
If you’re building a continuous program, this is exactly the kind of topic you want running on autopilot with adaptive assignments—see uLearn for training and uPhish for simulations.
2) Business Email Compromise (BEC)
BEC is still one of the most damaging “human-first” threats because it targets process, not just technology. In the FBI’s 2024 IC3 report, Business Email Compromise is shown as a major loss category (billions in reported losses).
What to teach:
- Any request to change bank details = verify out-of-band (known number, not the email reply).
- Any request to buy gift cards / move funds fast = treat as suspicious by default.
- Train finance + HR + exec assistants on role-based BEC scenarios (they’re the usual targets).
MSP tip: turn BEC training into a repeatable client pack:
- “Finance & payments” module + quarterly simulation
- one-page “payment change verification” policy acknowledgement
usecure resource: For policy enforcement + audit trails, uPolicy pairs nicely with training when you need a "this is the process" paper trail.
3) QR Code Phishing (Quishing)
QR code phishing is having a moment because it often pushes the user off the protected endpoint and onto a phone. A recent FBI advisory describes “quishing” as embedding malicious URLs inside QR codes to pivot victims to mobile devices and bypass traditional email controls.
What to teach:
- Never scan QR codes in unexpected emails/messages for login or MFA “fixes”.
- If a QR code says “scan to secure your account,” go to the site manually instead.
- Check the URL after scanning before entering credentials (and if in doubt, bail).
Quick win: include QR-code examples in phishing simulations (because users will see these).
Helpful internal read: usecure’s guide on QR code phishing is a good companion piece: QR code phishing attacks: how to avoid quishing.
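For IT teams triaging QR codes that users report, the "check the URL after scanning" habit can even be scripted. A minimal sketch using Python's standard library (the allowlisted domains here are illustrative examples, not a recommendation):

```python
from urllib.parse import urlparse

# Hypothetical allowlist - swap in your organisation's real login domains.
TRUSTED_LOGIN_DOMAINS = {"login.microsoftonline.com", "accounts.google.com"}

def is_trusted_login_url(url: str) -> bool:
    """Return True only if the scanned URL's exact hostname is on the allowlist.

    Exact matching matters: a lookalike such as login.microsoftonline.com.evil.example
    contains the trusted name as a substring but is NOT a trusted host.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_DOMAINS

print(is_trusted_login_url("https://login.microsoftonline.com/common"))        # True
print(is_trusted_login_url("https://login.microsoftonline.com.evil.example"))  # False
```

The same exact-hostname lesson works in training: teach users to read the domain right-to-left, because attackers pad lookalike URLs with trusted-sounding prefixes.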
4) MFA Fatigue
MFA helps a lot—until a user gets spammed with prompts and taps “Approve” just to make it stop.
CISA’s guidance on phishing-resistant MFA notes that some MFA approaches are vulnerable to attacks like push bombing (and other bypass techniques).
What to teach:
- If you didn’t log in, deny the prompt.
- Report repeated prompts immediately—this often means credentials are already exposed.
- Use number matching / stronger factors where possible (org-level controls), and teach users what “legit” looks like.
usecure resource: There’s a dedicated explainer you can link internally: Overcoming MFA fatigue attacks.
5) OAuth/device-code phishing + consent-based account takeovers
This one catches smart people because it can involve real login pages and “legitimate” flows.
Microsoft has documented device code phishing campaigns (active since 2024) that use lures resembling messaging experiences to trick users into authenticating—resulting in account access without classic “password stealing” the way people expect.
What to teach:
- Be suspicious of “enter this code to join a meeting/chat” messages you weren’t expecting.
- Understand the difference between:
- “I typed my password into a fake page” vs
- “I authenticated a real page… but for the attacker’s session”
- If the workflow feels weird, it probably is. Stop and verify.
IT leader note: this is where awareness and conditional access controls should meet in the middle—training reduces success rate, controls reduce blast radius.
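To make the "real page, attacker's session" distinction concrete for technical audiences, the device authorization flow (RFC 8628) can be shown as a toy simulation. This is a deliberately simplified stand-in with no real identity provider; the class and token format are invented for illustration:

```python
import secrets

class ToyIdentityProvider:
    """Minimal stand-in for an OAuth device authorization endpoint (RFC 8628)."""

    def __init__(self):
        self._pending = {}  # user_code -> approving username (or None)

    def start_device_flow(self) -> str:
        # Step 1 (attacker): request a device code; get a short user code to phish with.
        user_code = secrets.token_hex(4).upper()
        self._pending[user_code] = None
        return user_code

    def user_approves(self, user_code: str, username: str) -> None:
        # Step 2 (victim): enters the code on the GENUINE login page and signs in.
        self._pending[user_code] = username

    def poll_for_token(self, user_code: str):
        # Step 3 (attacker): polls until the victim approves, then holds a valid session.
        approved_by = self._pending[user_code]
        return f"access_token_for_{approved_by}" if approved_by else None

idp = ToyIdentityProvider()
code = idp.start_device_flow()           # attacker sends this code in a "join my meeting" lure
assert idp.poll_for_token(code) is None  # nothing yet - the lure hasn't landed
idp.user_approves(code, "alice")         # victim authenticates on the real site
print(idp.poll_for_token(code))          # prints "access_token_for_alice"
```

Notice that the victim never typed a password into a fake page; the only phishable moment was entering the attacker-supplied code, which is exactly what training needs to flag.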
6) Credential Hygiene & Reuse
Credential abuse continues to be a major route into accounts, and users still reuse passwords (especially across personal/work contexts).
For training, the aim isn’t “make passwords harder.” It’s: make authentication safer and easier. That means:
- password managers
- longer passphrases
- and (where available) passkeys / phishing-resistant MFA
What to teach:
- Use a password manager (and never reuse work passwords).
- If a password is suspected exposed, change it and report it.
- Explain passkeys in plain English: “your device signs you in—nothing to steal.”
If you want an easy “why now” link, this internal post works well: The 16 billion password leak: what businesses need to know.
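"Longer passphrases" is easy to demo in a lunch-and-learn. A minimal sketch using Python's `secrets` module (the wordlist here is a tiny illustrative sample; real diceware-style generators draw from thousands of words):

```python
import secrets

# Tiny illustrative wordlist - real diceware lists contain ~7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet", "canyon", "maple"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Join randomly chosen words using a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "maple-orbit-velvet-canyon"
```

The teaching point: four random common words are far easier to remember than `P@ssw0rd1!` variants, and with a full-size wordlist they carry much more entropy.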
7) Safe Use of Generative AI Tools at Work
This is the “new compliance meets new risk” training topic.
One 2025 breach-focused analysis notes that 15% of employees routinely accessed generative AI platforms on corporate devices, increasing the risk of data leaks.
And ENISA flags AI as a defining element of the threat landscape, including AI-supported phishing and synthetic media.
What to teach (simple rules that actually stick):
- Don’t paste confidential/client data into public AI tools unless explicitly approved.
- Treat prompts like emails: assume they’re logged somewhere.
- Watch for “AI lures” (fake AI tools/extensions/downloads used as malware bait).
- If AI is used for drafting, require a human verification step for factual accuracy and tone.
For IT leaders/MSPs:
Use policy + training together. NIST’s work on managing generative AI risks is a useful neutral framework to shape internal guidance.
And for the policy piece, uPolicy can turn “AI acceptable use” into a trackable acknowledgement flow.
8) Cloud/SaaS sharing & collaboration app hygiene
This is the everyday stuff that causes real-world incidents:
- “Anyone with the link” sharing
- external guests
- accidental oversharing
- malicious browser extensions / add-ins
ENISA explicitly highlights intensified abuse of cyber dependencies, including compromises in open-source repositories, malicious browser extensions, and service provider breaches.
What to teach:
- Stop using public sharing links for sensitive files.
- Verify external guest invites (especially unexpected ones).
- Be cautious with “install this plugin/add-in to view the document” prompts.
How to target it:
This topic is perfect for role-based training (e.g., sales and project teams share lots; finance shares sensitive docs).
9) Ransomware & Extortion
Employees don’t need to understand “how ransomware works” at a technical level. They need to know:
- how it commonly starts
- what early indicators look like
- and what to do fast
A widely cited breach report summary shows ransomware involved in 44% of cybersecurity breaches (and rising).
ENISA also describes ransomware as central to intrusion activity in its reporting period.
What to teach:
- Report suspicious emails/attachments immediately (speed matters).
- Don’t “click around to see what happens.”
- If something looks wrong (sudden file changes, access issues), disconnect and report according to your internal process.
MSP packaging idea: include ransomware awareness as a standard quarterly module, plus a lightweight “incident reporting refresher” every month.
10) Third-Party & Supplier Compromise Awareness
Supply chain and third-party exposure is a massive force multiplier.
A 2025 breach analysis summary notes third-party involvement doubled from 15% to 30%.
ENISA also warns about “cyber dependencies” and how compromises can amplify risk across interconnected ecosystems.
What to teach:
- Vendor invoices and “new payment details” messages need verification (ties back to BEC).
- Don’t trust “shared” files just because the sender is a known partner—accounts get compromised.
- Report anything that looks like a supplier account takeover (odd tone, unusual request, wrong timing).
IT leader note: pair training with a supplier onboarding checklist (access, MFA, least privilege, offboarding).
11) AI Voice Cloning & Executive Impersonation
AI voice cloning lowers the barrier for convincing impersonation attempts. Attackers can now:
- Mimic executive voices
- Use SMS or messaging apps for urgent requests
- Combine email and voice for layered social engineering
ENISA and Microsoft both highlight synthetic media and AI-enabled impersonation as emerging risks in their recent reporting.
What to teach:
- Voice alone is not proof of identity.
- Follow verification processes for urgent executive requests.
- Cross-check unusual instructions through known contact methods.
- Be cautious of payment requests via WhatsApp, Teams or SMS.
12) Insider Risk & Data Handling
Not every data incident involves an external attacker. Insider risk spans accidental exposure (mis-sent emails, oversharing) and deliberate misuse, and everyday data handling habits reduce both.
What to teach:
- Handle data according to its classification, and ask before sharing if you're not sure.
- Double-check recipients and attachments before sending anything sensitive.
- Don't move company data to personal devices, email accounts or cloud storage.
- Report accidental exposure immediately: fast, no-blame reporting limits the damage.
How to turn this into a simple 2026 training plan (without turning it into a full-time job)
An awareness programme that sticks usually does one thing really well: it's clear and repeatable.
Here’s a cadence that works for most organisations and MSP client bases:
- Monthly: 5–10 minute micro-module (rotate through the 12 topics)
- Monthly or bi-monthly: phishing simulation (rotate lures: AI/BEC/QR/device-code)
- Quarterly: role-based refreshers for high-risk teams (finance, HR, exec support, IT)
- Always-on: policy acknowledgements for key behaviours (AI use, data handling, incident reporting)
If you’re evaluating platforms for this kind of “set-and-improve” delivery, this checklist is a good benchmark: Choosing a security awareness training platform in 2026: the 10-point checklist.
FAQ
How many security awareness topics should we train on in 2026?
Aim for 8–12 core topics, taught continuously in short bursts. This blog's 12-topic list is a strong baseline that covers modern social engineering, identity abuse, AI risks, and business-impact threats.
What’s the single most important topic this year?
If you only pick one: modern phishing + impersonation (now AI-boosted)—because it’s still a primary way attackers get initial access, and it feeds BEC, ransomware, and account takeovers.
Should security awareness training include AI?
Yes—two ways:
- AI as a threat amplifier (better phishing, impersonation, synthetic media)
- AI as a workplace tool (data leakage, risky prompts, fake AI tools)
What should MSPs prioritise differently?
MSPs should prioritise:
- standardised, repeatable campaigns across tenants
- role-based packs (especially finance/BEC)
- reporting that proves improvement over time (for QBRs)
You can also point prospects to usecure for MSPs as the “how to deliver at scale” reference point.
Ready to put these topics on autopilot?
If you want to see how usecure helps IT teams and MSPs deliver continuous, risk-adapted training (plus phishing, policy management and breach visibility) without drowning in admin…
👉 Explore the usecure demo hub: Watch demos in the usecure Demo Hub