As we navigate the rapidly evolving landscape of financial technology, one development stands out as both a powerful defensive tool and a potential source of new vulnerabilities: Agentic AI. Unlike traditional AI systems that simply respond to commands, agentic AI can autonomously make decisions, learn from experiences, and take independent actions to protect financial systems.

The Cybersecurity Revolution in Financial Services
The financial sector has always been at the forefront of cybersecurity innovation out of necessity. With financial institutions managing trillions in assets and processing billions of transactions daily, they remain prime targets for increasingly sophisticated cyber threats. A successful attack can result not only in financial losses but also in devastating reputational damage and regulatory consequences.
Traditional cybersecurity approaches in finance have relied on rule-based systems, manual monitoring, and reactive measures. While these have served as a foundation, they increasingly struggle to keep pace with the volume, velocity, and sophistication of modern cyber threats. This is where agentic AI is creating a paradigm shift in how we approach cybersecurity.
According to recent research, 75% of financial firms surveyed by the Bank of England in 2024 reported already using AI in some capacity, with cybersecurity being one of the primary applications. This adoption is accelerating, with 76% of financial organizations planning to implement agentic AI systems within the next 12 months.
How Agentic AI Is Transforming Financial Cybersecurity
Agentic AI represents a fundamental evolution from earlier AI implementations. While traditional AI agents might perform specific tasks like monitoring network traffic or flagging suspicious emails, agentic AI systems can autonomously detect threats, make decisions about how to respond, and take protective actions with minimal human intervention.
Real-time Threat Detection and Response

One of the most powerful applications of agentic AI in financial cybersecurity is its ability to provide continuous, real-time monitoring of systems for suspicious activities. These systems process vast amounts of data, identifying patterns and anomalies that would be impossible for human teams to detect.
For example, JPMorgan Chase has implemented agentic AI systems that can detect unusual transaction patterns across millions of accounts simultaneously. These systems don’t just flag potential issues — they can take immediate action to prevent fraud, such as temporarily freezing suspicious transactions until they can be verified.
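To make the idea concrete: at its simplest, this kind of transaction monitoring compares each new transaction against an account’s recent history and holds outliers for verification. The sketch below is purely illustrative (a basic z-score rule, not any bank’s actual system; the function name and threshold are assumptions):

```python
from statistics import mean, stdev

def flag_suspicious(amounts, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    account's recent history (illustrative z-score rule only)."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > z_threshold  # True -> hold transaction for verification

history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(flag_suspicious(history, 5000.0))  # True: far outside normal range
print(flag_suspicious(history, 115.0))   # False: typical amount
```

Production systems use far richer features (merchant, geography, device, velocity) and learned models, but the core pattern — score against a behavioral baseline, then act automatically on outliers — is the same.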
Automated Risk Assessment
Agentic AI is transforming how financial institutions evaluate potential vulnerabilities across their networks and applications. These systems can continuously scan for weaknesses, prioritize responses based on severity and potential impact, and dynamically adjust security protocols in response to emerging threats.
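The prioritization step described above can be sketched in a few lines: score each finding by severity weighted by the impact of the affected asset, then rank. The scoring scheme and field names here are assumptions for illustration, not a real scanner’s output:

```python
# Hypothetical vulnerability findings; "asset_impact" (0-1) weights how
# critical the affected system is to the business.
findings = [
    {"cve": "CVE-A", "severity": 7.5, "asset_impact": 0.9},
    {"cve": "CVE-B", "severity": 9.8, "asset_impact": 0.4},
    {"cve": "CVE-C", "severity": 5.0, "asset_impact": 1.0},
]

def risk_score(f):
    # Illustrative: severity alone would rank CVE-B first, but weighting
    # by asset impact surfaces the flaw on the critical system instead.
    return f["severity"] * f["asset_impact"]

ranked = sorted(findings, key=risk_score, reverse=True)
print([f["cve"] for f in ranked])  # -> ['CVE-A', 'CVE-C', 'CVE-B']
```

The point of the example is the design choice: raw severity scores alone mis-rank work; weighting by business impact is what lets an autonomous system prioritize sensibly.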
Citigroup has established an AI governance board that actively reviews AI-driven decisions for fairness and bias mitigation, ensuring that automated risk assessments remain accurate and unbiased. This approach allows for more comprehensive security coverage while reducing the burden on human security teams.
Predictive Analytics and Proactive Defense
Perhaps the most significant advantage of agentic AI in cybersecurity is its ability to move from reactive to proactive defense. By analyzing patterns and historical data, these systems can anticipate potential attack vectors and strengthen defenses before attacks occur.
Barclays has adopted a human-in-the-loop model for its AI-driven security systems, where AI predictions about potential threats are reviewed by security experts before major defensive actions are taken. This hybrid approach combines the speed and pattern recognition capabilities of AI with human judgment and contextual understanding.
Multi-Agent Cybersecurity Ecosystems

Advanced financial institutions are now deploying entire ecosystems of specialized AI agents that work together to protect their systems. One AI agent might focus on threat detection, another on incident response, while a third engages in predictive analysis of potential future threats.
This collaborative approach mirrors how human security teams operate but at a scale and speed that would be impossible for human analysts alone. For instance, a leading financial institution implemented a multi-agent system with specialized components:
- Data Sources Agent: Collects information from network traffic, logs, and threat feeds
- User Behavior Analysis Agent: Monitors for abnormal user behavior
- Threat Intelligence Agent: Gathers information on emerging cyber threats
- Incident Response Strategy Agent: Develops response plans for detected threats
The result was a significant reduction in fraud losses and enhanced protection for millions of daily transactions.
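The division of labor among those four agents can be sketched as a simple pipeline: one agent collects events, others enrich and score them, and a final agent plans responses for anything above a severity threshold. All class names, events, and thresholds below are hypothetical, not the institution’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    detail: str
    severity: int = 1

class DataSourcesAgent:
    def collect(self):
        # Stand-in for network traffic, logs, and threat feeds.
        return [Event("logs", "failed logins spike", 2),
                Event("network", "traffic to known-bad host", 3)]

class UserBehaviorAgent:
    def analyze(self, events):
        # Raise severity on behavioral anomalies.
        for e in events:
            if "login" in e.detail:
                e.severity += 1
        return events

class ThreatIntelAgent:
    def enrich(self, events):
        # Raise severity when a known threat indicator matches.
        for e in events:
            if "known-bad" in e.detail:
                e.severity += 2
        return events

class ResponseStrategyAgent:
    def plan(self, events):
        return [f"isolate and review: {e.detail}"
                for e in events if e.severity >= 3]

events = DataSourcesAgent().collect()
events = UserBehaviorAgent().analyze(events)
events = ThreatIntelAgent().enrich(events)
plans = ResponseStrategyAgent().plan(events)
print(plans)
```

In a real deployment each agent would be a separate service with its own models and data feeds; the sketch only shows how specialized agents compose into one decision flow.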
New Challenges in the Age of Agentic AI
While agentic AI offers powerful new defensive capabilities, it also introduces new challenges and potential vulnerabilities that financial institutions must address.

The Shadow AI Problem
A phenomenon called “shadow AI”—the unsanctioned use of AI tools by employees within organizations—is emerging as a significant security concern. Much like shadow IT, this refers to employees using public AI models for data analysis or AI-powered coding assistants without proper vetting.
Financial institutions must be particularly vigilant about this risk, as employees might inadvertently input sensitive financial data into public AI models, potentially exposing confidential information. According to IBM, addressing these risks requires “a mix of clear governance policies, comprehensive workforce training, and diligent detection and response.”
New Attack Surfaces
The interconnected nature of agentic AI systems introduces new vulnerabilities that cybercriminals are already attempting to exploit. As Nicole Carignan, VP of strategic cyber AI at Darktrace, points out, “multi-agent AI systems, while offering unparalleled efficiency for complex tasks, will introduce vulnerabilities such as data breaches, prompt injections, and data privacy risks.”
Financial institutions must recognize that their AI systems themselves can become targets of attacks, requiring new approaches to securing these critical components of their cybersecurity infrastructure.
Accountability and Transparency Challenges
As AI agents become more autonomous in their decision-making, questions about accountability and control become increasingly important. The “black box” nature of some AI systems makes it difficult to explain their decisions to regulators, customers, or internal auditors.
Paul Davis, CEO of Bank Slate, emphasizes that “human oversight is still needed to oversee inputs and review the decisioning process. You have to monitor for AI’s blind spots in areas such as risk assessment and crisis management.”
Building Cyber Resilience with Agentic AI
Despite these challenges, financial institutions can take specific steps to harness the power of agentic AI while building robust cyber resilience.
Establishing Robust AI Governance
Financial institutions leading in this space have established comprehensive governance frameworks for their AI systems. JPMorgan Chase and HSBC have appointed Chief AI Risk Officers to oversee responsible AI usage, while Citigroup’s AI governance board actively reviews AI-driven decisions.
These governance structures ensure that while AI systems can operate autonomously, proper oversight mechanisms, accountability frameworks, and transparency requirements are in place. This approach aligns with the EU AI Act, which categorizes AI systems into different risk levels and establishes governance requirements accordingly.
Implementing Human-in-the-Loop Models
The most effective implementations of agentic AI in financial cybersecurity maintain a balance between automation and human oversight. Barclays’ approach of keeping humans involved in reviewing AI-generated security recommendations before major actions are taken represents a thoughtful middle ground.
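In code, such a human-in-the-loop gate is essentially a routing rule: low-impact actions execute automatically, while high-impact ones go to an analyst queue. This is a generic sketch of the pattern, not Barclays’ implementation; the action names and tiers are assumptions:

```python
# Low-impact actions the AI may take on its own.
AUTO_APPROVED = {"log", "alert", "rate_limit"}
# High-impact actions that must wait for analyst approval.
NEEDS_REVIEW = {"freeze_account", "block_segment", "revoke_credentials"}

def route_action(action, review_queue):
    if action in AUTO_APPROVED:
        return f"executed: {action}"
    if action in NEEDS_REVIEW:
        review_queue.append(action)
        return f"queued for analyst review: {action}"
    raise ValueError(f"unknown action: {action}")

queue = []
print(route_action("alert", queue))           # executed: alert
print(route_action("freeze_account", queue))  # queued for analyst review
print(queue)                                  # ['freeze_account']
```

The key design decision is where to draw the line between the two tiers: too much automation erodes oversight, while too little forfeits the speed advantage that justified the AI in the first place.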
Continuous Learning and Adaptation
The most resilient cybersecurity systems combine the strengths of both AI and human intelligence in a continuous learning loop. AI systems detect patterns and anomalies at scale, while human experts provide context, judgment, and strategic direction.
This hybrid approach allows financial institutions to respond to emerging threats more effectively than either AI or human teams could accomplish alone. As threats evolve, both the AI systems and human teams learn and adapt together, creating a continuously improving security posture.
The Future of Financial Cybersecurity
Looking ahead to 2026 and beyond, several trends will shape how agentic AI continues to transform cybersecurity in financial services.
From Chatbots to Autonomous Agents
We’re seeing a clear trend away from simple chatbot interfaces towards more sophisticated, autonomous AI agents in security operations. These agents will be capable of not just detecting threats but also responding to them in real-time, often without human intervention.
This shift will raise important questions about accountability and control that financial institutions must address proactively. As these AI agents become more autonomous, ensuring their decision-making processes are transparent, auditable, and aligned with organizational policies will be essential.
AI in Software Security
Industry analysts predict that by 2027, at least 80% of developers in financial organizations will be using AI-powered coding tools in some capacity. While these tools can significantly speed up development and help identify bugs, they also introduce new security considerations.
Software developers will need to be vigilant about potential biases or errors introduced by AI coding assistants, as well as the possibility of cyber attacks targeting these AI systems themselves. Implementing a “trust and verify” approach to AI-generated code will be critical for maintaining security.
Evolving Regulatory Landscape
As agentic AI becomes more prevalent in financial cybersecurity, regulatory frameworks will continue to evolve. The EU AI Act represents just the beginning of what will likely be a comprehensive regulatory approach to AI in financial services.
Financial institutions should prepare for increased scrutiny of their AI systems, particularly those used for cybersecurity. Demonstrating responsible AI usage, maintaining appropriate human oversight, and ensuring transparency in AI decision-making will be key to regulatory compliance.
Conclusion
Agentic AI represents both the next frontier in cybersecurity defense and a new domain of potential vulnerability for financial institutions. Its ability to autonomously detect threats, make decisions, and take protective actions offers unprecedented capabilities for defending against increasingly sophisticated cyber attacks.
However, realizing these benefits requires thoughtful implementation, robust governance, and a balanced approach that combines the strengths of AI and human expertise. Financial institutions that get this balance right will not only enhance their security posture but also build greater trust with customers and regulators.
As we prepare for the Point Zero Forum 2025, cybersecurity and the role of agentic AI will undoubtedly be central to our discussions about the future of financial services. The forum’s focus on establishing resilient policies, infrastructure, and innovation aligns perfectly with the cybersecurity challenges and opportunities presented by agentic AI.
Remember that building cyber resilience is not a destination but a journey—one that requires continuous adaptation, learning, and collaboration across the financial ecosystem. As I often say, “We are at the beginning of a marathon. It’s not a sprint.” The most successful institutions will be those that approach agentic AI in cybersecurity with both enthusiasm for its potential and thoughtfulness about its implementation.
I look forward to continuing this conversation at the Point Zero Forum in Zurich and exploring how we can collectively harness the power of agentic AI to build a more secure and resilient financial system.
Oliver Bussmann is a global technology thought leader and ambassador to the Point Zero Forum. With extensive experience as a former Group CIO at UBS and SAP, he advises financial institutions on digital transformation strategies and emerging technologies.
Please note that this newsletter reflects Bussmann Advisory’s and Oliver Bussmann’s personal views and not those of any organization we are involved with. This newsletter is for educational purposes only and none of its content should be construed as investment or financial advice of any kind. More information on www.bussmannadvisory.com.
Image Credits: OpenAI