In recent years, AI assistants like ChatGPT, Microsoft Copilot, and Google Gemini are rapidly becoming embedded in everyday business workflows. Employees are using them to draft emails, summarize reports, analyze data, and even assist with decision-making. While productivity is rising, so is a new class of risk: prompt injection and AI misuse within sensitive workflows.
In this blog, we’ll explore the mechanics of direct and indirect prompt injection, examine recent real-world examples from 2025–2026, and demonstrate why Human Risk Intelligence (HRI) is now essential to proactively defend against AI-era human vulnerabilities.
What is Prompt Injection?
Prompt injection is a manipulation technique where hidden or malicious instructions are embedded into content that an AI processes, causing it to behave in unintended ways, such as leaking data, bypassing safeguards, or performing unauthorized actions. Prompt injection attacks come in two main forms:

- Direct Prompt Injection
This occurs when a malicious prompt overrides safeguards. The attacker (or a misguided user) directly enters malicious instructions into the AI’s input field, attempting to override the system’s built-in rules, safety guardrails, or developer instructions. Classic jailbreak-style attacks fall into this category (e.g., DAN prompts, roleplay overrides).
- Indirect Prompt Injection
The attacker never speaks directly to the AI. Instead, they hide malicious instructions in external content that the LLM will later process as trusted context, such as through retrieval-augmented generation (RAG) pipelines, file uploads, emails, web pages, documents, or shared drives. Malicious instructions are embedded in data sources the AI later retrieves; these get concatenated into the full prompt, causing the model to follow them as if they were part of the task. Indirect prompt injection scales massively in enterprises: a single poisoned email or shared document can compromise AI interactions for many users across the organization, often without any visible warning.
Real-World Examples of Prompt Injection from 2025–2026
- February 2026 Microsoft Copilot bug
The tool began summarizing confidential emails (including drafts and sent items marked “confidential”) without proper authorisation, exposing enterprise data for weeks until patched. - EchoLeak zero-click vulnerability (CVE-2025-32711)
Attackers could exfiltrate sensitive Microsoft 365 data (chats, OneDrive files, Teams messages) via Copilot without any user interaction. - Real-world indirect prompt injection (December 2025)
Attackers bypassed an AI-powered ad review system by embedding malicious instructions in a webpage, forcing the AI to approve scam ads.
These examples aren’t isolated lab demonstrations, they stem from tools deeply integrated into daily operations where staff routinely feed AI assistants sensitive HR records, financial spreadsheets, or client contracts for quick insights.
The Statistics: AI Risk is Already Here

Several recent research reports highlight the scale of AI-related risks:
- Latest data shows that 77% of employees have pasted sensitive company information into generative AI tools, often using personal accounts. Another 39.7% of all AI interactions now involve sensitive data.
- 75% of knowledge workers now use AI tools at work, often bringing their own tools without oversight.
- 38% of employees admit to sharing sensitive data with AI tools, frequently without employer awareness.
- The volume of sensitive data entered into AI tools has increased by 485% year-on-year.
- Nearly 60% of employees use unapproved (“shadow AI”) tools at work.
- Prompt injection remains the #1 vulnerability in the OWASP LLM Top 10 for 2025/2026.
The trend is clear: AI adoption is accelerating faster than organizations can secure it.
Human Risk Intelligence: Proactively Defending Against Evolving AI Threats
Traditional annual security awareness training is no longer sufficient against threats that evolve daily, especially AI-driven ones like prompt injection. Human Risk Intelligence (HRI) fundamentally changes the approach by shifting from reactive, one-size-fits-all programs to continuous, predictive, and adaptive defense focused on the human layer. In the context of AI threats, HRI delivers:

- Role-based, AI-specific training delivered just-in-time and tailored to actual behaviours. For example, HR staff receive modules on safe handling of employee data in AI tools, while finance teams get guidance on avoiding prompt injection in budget analysis workflows.
- Automated policy enforcement for AI usage rules, ensuring consistent adherence without manual oversight.
- Continuous monitoring of emerging risks, so training and interventions evolve as fast as the threats, whether it’s new indirect injection techniques or AI-enhanced social engineering.
By making human risk visible, quantifiable, and actionable in real time, HRI turns employees from the weakest link into a proactive frontline defense, especially critical as AI tools become embedded in sensitive workflows.
The Future of AI Security is Human-Centric
AI assistants are not just tools; they are amplifiers of human behaviour. That means your biggest risk is not the AI itself, but how your people use it.
When AI is already in your workplace, the question is: are you managing the human risk that comes with it? Discover how we help you identify, measure, and reduce AI-driven human risk.
Explore the demo hub to see our security awareness solution and the wider human risk suite in action.
Subscribe to newsletter
Discover how professional services firms reduce human risk with usecure
See how IT teams in professional services use usecure to protect sensitive client data, maintain compliance, and safeguard reputation — without disrupting billable work.
Related posts
Explore more insights, updates, and resources from usecure.


.png)
.png)
