{"id":11621,"date":"2025-12-19T09:05:14","date_gmt":"2025-12-19T14:05:14","guid":{"rendered":"https:\/\/www.carahsoft.com\/wordpress\/?p=11621"},"modified":"2025-12-19T09:52:19","modified_gmt":"2025-12-19T14:52:19","slug":"forrester-building-a-security-strategy-for-agentic-ai-blog-2025","status":"publish","type":"post","link":"https:\/\/www.carahsoft.com\/wordpress\/forrester-building-a-security-strategy-for-agentic-ai-blog-2025\/","title":{"rendered":"Building a Security Strategy for Agentic AI: A Framework for State and Local Government"},"content":{"rendered":"\n

As artificial intelligence (AI) evolves from simple chatbots to autonomous agents capable of making independent decisions, State and Local Government agencies face a fundamental shift in cybersecurity requirements. Recent research<\/a> shows 59% of State and Local Government respondents report already using some form of generative AI (GenAI), with 55% planning to deploy AI agents for employee support within the next two years. Yet this rapid adoption brings unprecedented security challenges. Because AI agents are designed to pursue goals autonomously, even adapting when security measures block their path, Chief Information Security Officers (CISOs) responsible for safeguarding Government networks must rethink traditional defenses and embrace a new security paradigm.<\/p>\n\n\n\n

The Emergence of Agentic AI and Its Unique Security Challenges<\/h2>\n\n\n\n

AI agents represent a significant departure from the GenAI tools many agencies currently use. While traditional Large Language Models (LLMs), such as a support chatbot, respond to prompts and return information, AI agents and agentic systems are autonomous software programs that can plan, reflect, use tools, maintain memory and collaborate with other agents to achieve specific goals. These capabilities make them powerful productivity tools, but they also introduce failure modes that conventional software simply does not have. Unlike deterministic systems that crash when something goes wrong, AI agents can fail silently through collusion, context loss or corrupted cognitive states that propagate errors throughout connected systems. Research examining the real-world performance of AI agents<\/a> found that single-turn tasks had a 62% failure rate, with success rates dropping even further in multi-turn scenarios.<\/p>\n\n\n\n

When Veracode examined 100 LLMs performing programming tasks<\/a>, these systems introduced security vulnerabilities 45% of the time. For State and Local agencies handling sensitive citizen data, managing critical infrastructure or supporting public safety operations, such error rates demand robust security frameworks designed specifically for autonomous systems.<\/p>\n\n\n\n

The New Security Paradigm: From Human-Centric to Agent-Inclusive Workforce Protection<\/h2>\n\n\n\n

AI agents, the newest coworkers, amplify insider threats by combining human-like autonomy with capabilities that exceed human limitations. While employees work within bounded motivation and finite skills, AI agents possess boundless motivation to achieve goals, uncapped skills that continuously improve and infinite willpower, constrained only by computational capacity. They will not simply make a single attempt to access a file, get blocked for lack of permissions, get frustrated and go home for the day the way an employee might; they will persistently pursue their objectives, potentially finding novel ways around security controls.<\/p>\n\n\n\n

This transformation fundamentally changes the attack surface agencies must protect. Data breaches continue to impose significant financial and operational strain across the public sector, with many state and local organizations reporting cumulative annual costs that reach into the millions. AI agents and agentic systems collapse traditional security models by operating as autonomous workforce members who interact with systems, access data and make decisions without direct human oversight. They can be compromised through threats specific to agentic AI, such as goal and intent hijacking, memory poisoning, resource exhaustion or excessive agency that can lead to unauthorized actions, all in pursuit of achieving programmed objectives. For Government agencies managing limited security budgets while protecting essential citizen services, this exponential increase in potential attack vectors demands proactive frameworks rather than reactive responses.<\/p>\n\n\n\n

The AEGIS Framework: A Six-Domain Approach to Securing Agentic AI<\/h2>\n\n\n
\n
<\/figure><\/div>\n\n\n

Forrester\u2019s<\/a> Agentic AI Enterprise Guardrails for Information Security (AEGIS)<\/a> framework provides a comprehensive approach that helps CISOs secure autonomous AI systems across six critical domains.<\/p>\n\n\n\n

Governance, Risk and Compliance (GRC) establishes oversight functions and continuous monitoring capabilities. Identity and Access Management (IAM) addresses the unique challenge of agent identities, which combine characteristics of both machine and human identities. Data Security focuses on classifying data appropriately, implementing controls for agent memory and considering data enclaves and anonymization from privacy perspectives.<\/p>\n\n\n\n

Application Security evaluates risks across the entire software development lifecycle (SDLC), implements Development, Security and Operations (DevSecOps) best practices, assesses the software supply chain and uses adversarial red team testing to validate safety and security controls. This domain focuses on embedding telemetry that gives security teams visibility into agent behavior and decision making. Threat Management ensures logs are accessible to security operations center analysts, enabling detection of behavioral anomalies and supporting forensic investigations. Zero Trust Architecture (ZTA) applies principles such as implementing network access layer controls for agent workloads, continuously validating the agent\u2019s runtime environment and monitoring agent-to-agent communication.<\/p>\n\n\n\n
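As an illustrative aid, the zero trust controls above can be pictured as a simple allowlist gate on agent-to-agent calls: a request is denied unless both the calling agent and the requested action are explicitly in scope. The sketch below is hypothetical; the agent names, the AgentPolicy class and the authorize function are illustrative assumptions, not part of the AEGIS framework or any vendor API.<\/p>\n\n\n\n

```python
# Hypothetical sketch of a zero trust gate for agent-to-agent messages.
# Nothing is trusted because it originates inside the network; every call
# is checked against an explicit allowlist and a per-agent action scope.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_peers: set = field(default_factory=set)    # who may call this agent
    allowed_actions: set = field(default_factory=set)  # what they may request

def authorize(policy: AgentPolicy, sender: str, action: str) -> bool:
    """Permit a request only if sender and action are both explicitly in scope."""
    return sender in policy.allowed_peers and action in policy.allowed_actions

# Illustrative example: a records agent accepts read requests
# only from a known intake agent, and nothing else.
records_policy = AgentPolicy(
    agent_id="records-agent",
    allowed_peers={"intake-agent"},
    allowed_actions={"read_case_summary"},
)

print(authorize(records_policy, "intake-agent", "read_case_summary"))   # True
print(authorize(records_policy, "intake-agent", "delete_record"))       # False
print(authorize(records_policy, "unknown-agent", "read_case_summary"))  # False
```

In a production deployment, checks like this would sit alongside the continuous runtime validation and centralized logging the framework describes, so that denied requests also surface to security operations center analysts.<\/p>\n\n\n\n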

Underlying the framework are three core principles:<\/p>\n\n\n\n