
Cyber-attacks have more than doubled worldwide in just four years, from 818 per organisation in 2021 to almost 2,000 per organisation this year, according to the World Economic Forum (WEF). It’s a staggering statistic. And small businesses are particularly exposed, now seven times more likely to report insufficient cyber-resilience than they were in 2022. Whether we like it or not, artificial intelligence(AI) tools have had a big role to play here, not just with the increasing volume of attacks but also the sophistication.
Risks are now emerging at every layer of the AI stack, from prompt injection and data leakage to AI-powered bot scraping and deepfakes. As a recent industry report reveals, attackers are now using large language models (LLMs) to craft convincing phishing campaigns, write polymorphic malware, and automate social-engineering at scale. The result is a threat environment that learns, adapts, and scales faster than human analysts can respond.
AI systems are built in layers, and each one brings its own weak spots. At the environment layer, which provides computing, networking and storage, the risks resemble those in traditional IT but the scale and complexity of AI workloads make attacks harder to detect. The model layer is where manipulation starts. Prompt injection, non-compliant content generation and data exfiltration are now among the top threats, as highlighted in the OWASP 2025 Top 10 for LLM Applications. The context layer, home to retrieval-augmented generation (RAG) databases and memory stores, has become a prime target for data theft. Meanwhile, at the tools and application layers, over-privileged APIs and compromised AI agents can give attackers the keys to entire workflows.
In other words, the attack surface is expanding in every direction, and with it, the need for smarter defences. The answer isn’t to abandon AI but to use AI to secure AI. So a comprehensive security framework needs to span the full AI lifecycle, protecting three essential layers: model infrastructure, the model itself, and AI applications. When security is embedded into business workflows rather than bolted on afterward, organizations gain efficient, low-latency protection without sacrificing convenience or performance.
Security teams are already deploying intelligent guardrails that scan prompts for malicious intent, detect anomalous API behaviour and watermark generated content for traceability. The latest generation of AI-driven security operations applies multi-agent models to analyse billions of daily events, flag emerging risks in real time and automate first-response actions. According to PwC’s Digital Trust Insights 2026 survey, AI now tops the list of investment priorities for Chief Information Security Officers(CISOs) worldwide, a sign that enterprises are finally treating cyber resilience as a learning system, not a static checklist.
Yet even as enterprises strengthen their defences, a new and largely self-inflicted risk is taking shape inside their own networks. It’s called shadow AI. In most organisations, employees are using generative tools to summarise reports, write code or analyse customers, often without official approval or data-governance controls. According to one report from Netskope, around 90 percent of enterprises now use GenAI applications, and more than 70 per cent of those tools fall under shadow IT. Every unmonitored prompt or unvetted plug-in becomes a potential leak of sensitive data.
Internal analysis across the industry suggests that nearly 45 percent of AI-related network traffic contains sensitive information, from intellectual property to customer records. In parallel, AI-powered bots are multiplying at speed. Within six months, bot traffic linked to data scraping and automated requests has quadrupled. While AI promises smarter, faster operations, it’s also consuming ever-greater volumes of confidential data, creating more to defend and more to lose.
Governments and regulators are beginning to recognise the scale of the challenge. Many AI governance rules all point to a future where organisations will be expected to demonstrate not only compliance, but continuous visibility over their AI systems. Security postures will need to account for model training, data provenance, and the behaviour of autonomous agents, not just network traffic or access logs. For many, that means embedding security directly into the development pipeline, adopting zero-trust architectures, and treating AI models as living assets that require constant monitoring.
Looking ahead, the battle lines are already being redrawn. The next phase of cybersecurity will depend on a dual engine – one that protects AI systems while also using AI to detect and neutralise threats. As machine-learning models evolve, so too must the defences surrounding them. Static rules and manual responses can’t keep pace with attackers who automate creativity and exploit speed. What’s needed is an ecosystem that learns as fast as it defends.
That shift is already underway. Multi-agent security platforms now coordinate detection, triage and recovery across billions of daily events. Lightweight, domain-specific models filter out the noise, while larger reasoning models identify previously unseen attack patterns. It’s an intelligence pipeline that mirrors the adversaries, only this one’s built for defence.
The future of digital security will hinge on collaboration between human insight and machine intuition. In practical terms, that means re-training the workforce, as much as re-architecting the infrastructure. Analysts who can interpret AI outputs, data scientists who understand risk, and policymakers who build trust through transparency are very much needed. The long game is about confidence, not just resilience. Confidence that the systems powering modern life are learning to protect themselves.
Because ultimately, AI isn’t the villain of this story. The same algorithms that make attacks more potent can also make protection more precise. The question for business leaders everywhere is whether they’ll invest fast enough to let intelligence, not inertia, define the next chapter of cybersecurity.
This article was originally published on Alizila written by Alizila Staff.
Alibaba Reports Solid Progress in AI + Cloud on the Strength of Its Full-Stack Capabilities
Alibaba Bets Big on Agentic AI: A Multi-Trillion Dollar Market Opportunity
1,397 posts | 492 followers
FollowAlibaba Clouder - March 5, 2018
Amuthan Nallathambi - August 24, 2023
VikashThakur - December 24, 2024
Alibaba Clouder - March 21, 2017
Yuriy Yuzifovich - November 8, 2022
CloudSecurity - April 21, 2026
1,397 posts | 492 followers
Follow
Container Compute Service (ACS)
A cloud computing service that provides container compute resources that comply with the container specifications of Kubernetes
Learn More
Container Service for Kubernetes
Alibaba Cloud Container Service for Kubernetes is a fully managed cloud container management service that supports native Kubernetes and integrates with other Alibaba Cloud products.
Learn More
Security Center
A unified security management system that identifies, analyzes, and notifies you of security threats in real time
Learn More
Tongyi Qianwen (Qwen)
Top-performance foundation models from Alibaba Cloud
Learn MoreMore Posts by Alibaba Cloud Community