As AI adoption accelerates globally, organizations are rapidly deploying advanced applications and building AI-ready data centers that demand real-time responsiveness, high performance, and robust security. A10 Networks (NYSE: ATEN) is addressing these evolving needs with a new set of AI-focused infrastructure and cybersecurity solutions, which will be showcased at Interop Tokyo 2025, themed “AI Steps into Reality,” from June 11–13.
Securing AI and LLM Workloads with New AI Firewall
A10 is introducing AI firewall capabilities designed to protect large language models (LLMs), whether they are custom-built or based on commercial models from providers such as OpenAI or Anthropic. These firewalls, built on edge-optimized, GPU-powered hardware, inspect prompt-level traffic to detect threats such as prompt injection attacks and sensitive data disclosure. They enforce AI-specific security policies and can be deployed across any infrastructure, adding an extra layer of protection for the APIs and URLs that expose AI models. Organizations can also use A10’s capabilities to test models against known vulnerabilities and apply proprietary techniques to safeguard their inference environments.
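To make the idea of prompt-level inspection concrete, here is a minimal, purely illustrative sketch of the kind of check such a firewall might run before a request reaches an LLM endpoint. The patterns, function name, and policy below are hypothetical assumptions for illustration, not A10's implementation:

```python
import re

# Hypothetical examples of prompt-injection and data-leak patterns;
# a production AI firewall would use far richer detection models.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US-SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # payment-card-like number
]

def inspect_prompt(prompt: str) -> dict:
    """Classify a prompt before forwarding it to the model API."""
    findings = []
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        findings.append("prompt_injection")
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        findings.append("sensitive_data")
    # Block the request if any policy violation was found.
    return {"allow": not findings, "findings": findings}
```

In practice a regex list like this would only be one layer; real prompt-injection detection typically combines pattern matching with model-based classification.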
Optimizing Performance in AI and LLM Environments
A10 continues to strengthen its performance optimization solutions for AI workloads by offloading compute-intensive processes such as TLS/SSL decryption, caching, and traffic routing. This frees up resources for core inference tasks and improves the responsiveness of AI applications. Additionally, A10 introduces predictive performance monitoring powered by GPU-accelerated appliances, which analyzes large volumes of telemetry in real time to detect network congestion or capacity issues early, allowing IT teams to take proactive action and reduce unscheduled downtime.
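The predictive-monitoring idea can be sketched in a few lines: flag utilization samples that deviate sharply from a sliding baseline before saturation occurs. The class below is a simplified, hypothetical illustration of that principle, not A10's product logic; a GPU-accelerated appliance would apply similar analysis to far larger telemetry streams:

```python
from collections import deque
from statistics import mean, stdev

class CongestionMonitor:
    """Illustrative early-warning monitor: flags link-utilization
    samples that deviate sharply from a sliding baseline."""

    def __init__(self, window: int = 20, threshold_sigma: float = 3.0):
        self.samples = deque(maxlen=window)  # recent utilization readings
        self.threshold_sigma = threshold_sigma

    def observe(self, utilization: float) -> bool:
        """Return True if this sample looks like emerging congestion."""
        alert = False
        if len(self.samples) >= 5:  # need a minimal baseline first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (utilization - mu) / sigma > self.threshold_sigma:
                alert = True
        self.samples.append(utilization)
        return alert
```

Feeding the monitor steady readings and then a sudden spike shows the intended behavior: the baseline samples pass silently, and the outlier triggers an alert that an operator (or an automated remediation step) can act on before users notice degradation.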
Delivering Resilience and Operational Intelligence
By combining threat intelligence, AI-native network architecture, and simplified management tools, A10 helps enterprises build secure and high-performing environments for AI applications. These capabilities are especially critical as enterprises scale their use of LLMs both on-premises and in the cloud.
“Enterprises are deploying and training AI and LLM inference models on-premises or in the cloud at a rapid pace,” said Dhrupad Trivedi, President and CEO of A10 Networks. “New capabilities must address three key challenges: latency, security, and operational complexity. With over 20 years of expertise, we’re expanding our capabilities to deliver resilience, high performance, and security for AI and LLM infrastructures.”
For more information on A10 Networks’ AI infrastructure solutions, visit: www.a10networks.com
