
The release of OpenAI’s ChatGPT helped push generative artificial intelligence (AI) into the enterprise mainstream. Since then, organizations have grappled with testing virtual chatbots and large language models (LLMs) to determine where these technologies could help.
Now, the latest iteration of AI is poised to create additional disruptions.
Dubbed agentic AI, this next iteration of artificial intelligence builds on generative AI technologies and platforms by creating systems that can act autonomously, including making decisions and carrying out tasks without human intervention.
While many tech firms that have previously invested in AI are working on agentic AI, Microsoft brought significant attention to the topic during the company’s Ignite conference in November 2024. Following that announcement, Gartner released a report predicting that nearly a third of enterprise applications will use the technology by 2028, and that 15 percent of everyday decisions will be made by these autonomous AI agents within three years.
Agentic AI Cybersecurity Concerns
Whether agentic AI lives up to the hype surrounding artificial intelligence remains open for debate. Some organizations, however, are beginning to evaluate what these autonomous agents can do to further automate various tasks, especially in cybersecurity. This includes adding automation to tasks such as vulnerability detection and the manual testing of security systems.
While organizations and their security teams are seeking to automate more manual tasks, cyber professionals are beginning to raise concerns over what the technology can do to further malicious operations.
Since ChatGPT’s release, cybersecurity experts and government agencies have warned about how cybercriminals and threat actors have used generative AI to create malware, develop phishing schemes and “poison” online information. The same is expected with agentic AI.
A report published by security vendor Malwarebytes earlier this year noted that agentic AI could lead to more sophisticated and targeted ransomware threats; these agents could work to circumvent security measures and speed up attack times.
“Malicious AI agents might also be tasked with searching out and compromising vulnerable targets, running and fine-tuning malvertising campaigns or determining the best method for breaching victims,” according to Malwarebytes.
Whether used for defense or offense, these agentic AI developments are expected to have major cybersecurity implications, which will affect tech and security pros tasked with assessing corporate networks and protecting company data.
“The autonomous nature of agentic AI introduces significant risks around control and accountability, as these systems can make rapid changes without traditional change management oversight,” J. Stephen Kowski, field CTO at SlashNext, recently told Dice. “Security concerns are paramount, as malicious actors could potentially exploit these AI agents to automate attacks, create unauthorized system changes or execute harmful scripts at unprecedented speed and scale. Organizations must implement robust security controls and monitoring systems to protect against unintended consequences and deliberate exploitation of AI agents operating within their environments.”
For cybersecurity professionals watching this space, keeping track of generative AI and now agentic AI developments can help keep their skill sets current, which matters because future employment and career advancement will likely depend on deep knowledge of these tools and platforms.
Understanding Agentic AI: Why It Matters
While companies such as Microsoft, Google and others are rushing to bring more agentic AI technologies to market or incorporate autonomous agents into existing products, the technology itself does not have a standard, universal definition.
Without that standard definition, it’s challenging to understand the full implications of agentic AI, but that should not stop security professionals from trying to grasp its potential, said David Benas, principal security consultant at Black Duck.
“Similar to ‘traditional’ technology, users of scientific simulation software should have a grasp on how it works, if only to understand its limitations and the assumptions it needs to make in order to be effective, performant or both,” Benas told Dice. “Whether there's an absolute need rather than a should is a different story, though, and chances are these tools will come into widespread use for some time before their users well understand them.”
The future adoption of agentic AI is hard to predict, but if organizations begin implementing these autonomous agents, the gap between those who understand the technology and those who do not will widen significantly, Benas added.
“I think agentic AI and really just generative AI in its more advanced forms will be a paradigm change for the security workforce,” Benas added. “There will be a huge schism between those with the skills to use or implement this technology and those who don't. And the most effective of the AI-skill-having workforce will be those who understand how it works. If this is a new skill or an extension of the creative malice underlying existing, effective security personnel, I suppose, is going to be up to the individual.”
For Nicole Carignan, senior vice president for security and AI strategy at Darktrace, the best way to understand and anticipate what agentic AI can or will do is to look at an organization’s data, including its governance, classification and posture management. These are the core foundations for future autonomous agents.
“It's important to note that not all agentic systems are equal—some are based on large language models with expansive capabilities and external data access, while others are smaller, local models with finite permissions and clearer boundaries,” Carignan told Dice. “The latter will be easier to secure in the short term, especially when they support data privacy, residency, and operate within defined parameters. As these systems evolve, security teams will need to tailor their approach: Applying strong data governance, setting strict API and communication boundaries, and ensuring agents are not self-discoverable or over-permissioned. Understanding the architecture and behavior of each agentic system will be critical to securing them effectively.”
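Carignan’s point about boundaries and permissions can be illustrated with a minimal sketch. The policy structure, tool names and agent identifier below are hypothetical and not tied to any specific product; the idea is simply that every agent action passes an explicit permission check before it runs, keeping a smaller, locally scoped agent within finite, auditable limits.

```python
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Hypothetical per-agent policy: which tools and destinations the agent may use."""
    agent_id: str
    allowed_tools: set = field(default_factory=set)
    allowed_domains: set = field(default_factory=set)


class PolicyViolation(Exception):
    """Raised when an agent tries to act outside its declared boundaries."""


def enforce(policy: AgentPolicy, tool: str, target_domain: str) -> None:
    """Reject any action that falls outside the agent's allowlists."""
    if tool not in policy.allowed_tools:
        raise PolicyViolation(f"{policy.agent_id} may not call tool '{tool}'")
    if target_domain not in policy.allowed_domains:
        raise PolicyViolation(f"{policy.agent_id} may not reach '{target_domain}'")


# Example: a small agent with finite permissions and clear boundaries.
ticket_bot = AgentPolicy(
    agent_id="ticket-triage-agent",
    allowed_tools={"read_ticket", "add_comment"},
    allowed_domains={"tickets.internal.example"},
)

enforce(ticket_bot, "read_ticket", "tickets.internal.example")   # allowed
# enforce(ticket_bot, "send_email", "mail.example.com")          # would raise PolicyViolation
```

The design choice here mirrors the quote: the agent cannot discover or grant itself new capabilities, because the policy lives outside the model and is enforced before any tool call executes.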
With agentic AI still in its developmental stages, the technology is expected to have a limited scope at first, giving security pros some time to fully understand its potential, noted Kris Bondi, CEO and co-founder of security firm Mimoto.
“While AI agents have a limited scope in where they can be used effectively, their use will still help reduce the volume of potential security threats that a security team will need to address themselves,” Bondi told Dice. “In theory, this would enable security pros to have more time to analyze more complex threats.”
Preparing Agentic AI Security
For cyber professionals thinking about how malicious actors might use agentic AI to target networks and data, Guy Feinberg, growth product manager at Oasis Security, believes that these agents are vulnerable to deceit—much like their human counterparts.
Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions. The real risk is not the AI itself, but the fact that organizations don’t manage these non-human identities (NHIs) with the same security controls they apply to human users.
“Manipulation is inevitable. Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents,” Feinberg told Dice. “The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. Security teams need visibility. If these NHIs were properly governed, security teams could detect and block unauthorized actions before they escalate into a breach.”
For organizations and their security teams thinking about agentic AI, Feinberg offers three protections to consider, illustrated in the sketch that follows the list:
- Treat AI agents like human users. Assign only the permissions they need, and continuously monitor their activity.
- Implement strong identity governance. Track which systems and data AI agents can access, and revoke unnecessary privileges.
- Assume AI can be manipulated. Use security controls that detect and prevent unauthorized actions, just as you would use phishing-resistant authentication for humans.
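As a rough illustration of those three protections, the sketch below treats each agent as a governed non-human identity with least-privilege scopes, an audit trail for every decision and the ability to revoke privileges that are no longer needed. The registry, agent names and scope strings are hypothetical, not drawn from any particular identity platform.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("nhi-governance")

# Hypothetical registry of non-human identities (NHIs) and their granted scopes.
NHI_REGISTRY = {
    "report-summarizer-agent": {"scopes": {"read:reports"}, "owner": "secops"},
    "patch-advisor-agent": {"scopes": {"read:cve-feed", "create:ticket"}, "owner": "it-ops"},
}


def authorize(agent: str, scope: str) -> bool:
    """Least-privilege check, with an audit log entry for every decision."""
    entry = NHI_REGISTRY.get(agent)
    allowed = bool(entry) and scope in entry["scopes"]
    log.info("%s agent=%s scope=%s allowed=%s",
             datetime.now(timezone.utc).isoformat(), agent, scope, allowed)
    if not allowed:
        # Unauthorized action: block it and surface it for review before it escalates.
        log.warning("blocked unauthorized action by %s: %s", agent, scope)
    return allowed


def revoke(agent: str, scope: str) -> None:
    """Remove a privilege an agent no longer needs."""
    NHI_REGISTRY.get(agent, {}).get("scopes", set()).discard(scope)


authorize("report-summarizer-agent", "read:reports")    # permitted and logged
authorize("report-summarizer-agent", "delete:reports")  # blocked and flagged
revoke("patch-advisor-agent", "create:ticket")
```

The point of the sketch is the workflow, not the code: an agent identity gets only the scopes it needs, every action it attempts is visible to the security team, and anything outside its grant is denied by default.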
Developing Skills for the Agentic AI Era
For Darktrace’s Carignan, the trajectory of generative AI and agentic AI is following that of early cloud computing adoption: concerns about interconnectivity and control that eventually gave way to acceptance by organizations.
The same learning curve for security professionals is also likely to apply to AI: Securing its use starts with understanding and governing the data it relies on. With that foundation, agentic AI can be deployed securely, in much the same way as cloud computing. These changes, however, will require acquiring new skills as well as deeper integration between IT and cybersecurity teams.
“This shift calls for tighter integration between cybersecurity and computer science, especially in education, where the two are still often siloed. Security teams must experiment with machine learning to understand how to secure it effectively, just as IT and security teams must work side by side to manage and orchestrate agentic systems,” Carignan added. “In the future, AI can’t be seen as a separate function or tool—it will require a deeply integrated approach across an entire organization to harness its power safely and securely.”