FRAUDFORWARD

Agentic Banking: Designing the Trust Engine with Accountability for the AI Era

44 min

What's up fraud fighters, and welcome to Fraud Forward!

Today we’re talking about AI transaction monitoring and why it matters so much as agentic banking starts moving from concept to reality.

Because once AI agents in banking move beyond simple chat and into actions, payments, and customer instructions, the risk conversation changes fast.

This is no longer just about whether AI in banking feels helpful.

It's about whether institutions can actually monitor intent, trace authorization, support banking dispute resolution, and reduce fraud risk without losing the trust they’ve worked so hard to build.

In this episode, I sat down with Tyllen Bicakcic to break down what agentic banking actually means, why it is gaining traction, and why AI transaction monitoring is going to become one of the most important control layers in this entire conversation.

What I really appreciated about this discussion is that we did not stay at the hype level.

We talked about real use cases. Real banking friction. Real fraud questions. And the very real challenge of how banks can build safer banking with AI without creating new blind spots around governance, payment risk monitoring, and customer trust.

And the reality is this: If AI agents are going to influence transactions, then transaction monitoring in banking has to evolve right alongside them.

What you'll hear in this episode

  • What agentic banking actually means and how it differs from traditional AI in banking
  • Why AI agents in banking create new questions around intent, authorization, and accountability
  • How AI transaction monitoring can strengthen banking fraud prevention and AI fraud prevention
  • Why trust infrastructure in banking matters as much as innovation
  • How banks can think about AI governance in banking without rebuilding everything from scratch
  • Why banking compliance and AI must move together as agentic payments become more practical
  • How banking dispute resolution changes when an AI agent initiates or influences a transaction
  • Where real-time fraud monitoring, payment risk monitoring, and AI risk signals fit into the next phase of digital banking AI

You should listen to this episode if

  • You lead fraud, risk, compliance, BSA, AML, or payments programs at a bank or credit union
  • Your team is evaluating agentic banking or AI agents in banking
  • You are thinking about how AI transaction monitoring should work in real financial services AI environments
  • You want a better understanding of AI payment authorization and dispute handling in an AI-driven world
  • You care about building safer banking with AI without giving up control, visibility, or trust

If you liked this episode, be sure to subscribe and review the podcast on iTunes, Spotify, YouTube, or wherever you listen to podcasts.

Episode notes & key takeaways

Before we double click on the notes, I just want to say that my marketing team told me I need to structure these notes a certain way in order for people to find my podcast. The below is a bit of that.

AI transaction monitoring starts with intent

One of the biggest themes in this conversation is that AI transaction monitoring cannot just focus on the transaction itself.

It has to focus on intent.

That was really the thread running through this entire discussion. Tyllen kept coming back to the idea that traditional banking systems usually see the final action, but not always the reason behind it. And that changes when agentic banking enters the picture.

If a customer asks an AI agent to help them save for something, move money, or set up a recurring action, the monitoring opportunity becomes much richer. Institutions are no longer limited to seeing the final transfer or payment event. They can start to understand why it happened, what was requested, and how the AI interpreted that request.

That matters because AI transaction monitoring becomes much stronger when it includes context, not just activity.
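To make that concrete, here is a minimal sketch, in Python, of what an intent-aware monitoring record might look like. The `MonitoredEvent` structure and all of its field names are hypothetical, not any vendor's actual schema; the point is simply that the stated intent, the agent's interpretation, and the executed action travel together.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoredEvent:
    """Hypothetical record pairing a transaction with the intent behind it."""
    customer_id: str
    stated_intent: str          # what the customer actually asked for
    agent_interpretation: str   # how the AI agent understood the request
    action_taken: str           # the transaction that actually executed
    amount: float
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Illustrative event for the savings-goal example discussed in the episode
event = MonitoredEvent(
    customer_id="cust-123",
    stated_intent="Save $600 for a concert over four months",
    agent_interpretation="Set up recurring $150/month transfer to savings",
    action_taken="recurring_transfer:checking->savings",
    amount=150.0,
)
```

Pairing those fields in one record is what lets a reviewer later ask whether the action matched the request, rather than seeing only the transfer.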

Agentic banking changes the monitoring conversation

Another thing that stood out to me is that agentic banking is not just another chatbot layer inside digital banking AI.

It is a shift toward action.

And once AI agents in banking start taking action on behalf of customers, whether that is setting up savings behavior, triggering reminders, or influencing payment flows, monitoring has to move with them.

That does not mean banks need to throw away their current programs. In fact, one of the strongest points in this episode is that much of the existing banking risk management framework still matters. Existing fraud controls still matter. Existing compliance programs still matter.

What changes is the depth of information available.

That is where AI governance in banking starts to feel much more practical. You are not replacing governance. You are giving it more context to work with.

Trust infrastructure matters more than hype

We also spent a lot of time talking about trust.

And I think that matters because too many conversations around AI in banking still get framed like a product demo instead of a risk reality.

Tyllen made a strong point that trust already exists at banks. Customers may be far more willing to engage with AI-powered banking through their financial institution than through a general-purpose tool. But that trust only holds if the institution can control the experience.

That is where trust infrastructure in banking becomes essential.

Not vague trust.
Not assumed trust.
Actual control.

Can the bank verify the action?
Can the bank separate user activity from agent activity?
Can the bank see what was asked, what was allowed, and what was executed?
Can the bank stop bad behavior without breaking the whole experience?

That is the real work.
And it is exactly why AI transaction monitoring matters.
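Those four questions can be read as a pre-execution gate. Below is a rough Python sketch of that idea; the `gate_agent_action` function, the actor labels, and the allow-list are illustrative assumptions, not a real bank control.

```python
# Hypothetical pre-execution gate: an agent-initiated action only proceeds
# if the bank can answer the control questions (verified, attributable,
# explicitly allowed by policy).

ALLOWED_AGENT_ACTIONS = {"transfer_internal", "set_savings_goal"}  # assumed policy list

def gate_agent_action(action: str, actor: str, verified: bool) -> tuple[bool, str]:
    """Return (allowed, reason). Agent activity is held to an explicit allow-list."""
    if not verified:
        return False, "action could not be verified against a customer instruction"
    if actor != "agent":
        return True, "human-initiated: existing controls apply"
    if action not in ALLOWED_AGENT_ACTIONS:
        return False, f"agent action '{action}' not in allow-list"
    return True, "agent action permitted by policy"
```

The design point is the asymmetry: agent activity is separated from user activity and checked against a narrower, explicit policy, so bad behavior can be stopped without breaking the whole experience.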

Fraud prevention gets stronger when context gets better

One of my favorite parts of this episode was the discussion around auditing the thinking.

Because if you work in fraud detection in banking, you already know how often teams end up asking the same question after something slips through.

What were they thinking?

What this episode suggests is that AI fraud prevention may actually give us a better way to answer that question.

If an AI agent is involved in the flow, and if the system is built well, then institutions may be able to review the logic behind the action in ways they simply cannot with traditional human-led interactions. That includes understanding prompts, decisions, steps taken, and whether a transaction aligned with policy and customer input.

That does not eliminate fraud risk.
But it does create a better foundation for banking fraud prevention and real-time fraud monitoring.

And in this environment, more context is not a nice-to-have.
It is a control advantage.
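As a rough illustration of "auditing the thinking", here is a hypothetical Python sketch of an agent decision log. The step names and the `reconstruct` helper are invented for the example; the idea is only that each prompt, plan, policy check, and execution leaves a reviewable record.

```python
# Hypothetical audit trail: each agent decision step is logged so reviewers
# can reconstruct "what was it thinking" after the fact.

audit_log: list[dict] = []

def log_step(session_id: str, step: str, detail: str) -> None:
    """Append one reasoning step to the shared audit log."""
    audit_log.append({"session": session_id, "step": step, "detail": detail})

log_step("s-1", "prompt", "Customer asked to save $600 over four months")
log_step("s-1", "plan", "Proposed a $150/month recurring transfer")
log_step("s-1", "policy_check", "Recurring internal transfer: within limits")
log_step("s-1", "execute", "Created transfer checking->savings, $150/month")

def reconstruct(session_id: str) -> list[str]:
    """Replay the reasoning steps for one session, in order."""
    return [entry["step"] for entry in audit_log if entry["session"] == session_id]
```

A human teller's reasoning leaves no such trail; an agent's can, which is the control advantage the episode describes.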

Compliance does not disappear, it adapts

Lisa added a perspective here that I really appreciated because she kept grounding the conversation in operational reality.

She made the point that this is not an entirely new compliance universe. It is a new channel, a new interaction layer, and a new type of data environment. And that framing is important.

Banking compliance and AI should not be treated like two separate worlds.

The real question is how institutions adapt existing processes to support agentic payments, new monitoring inputs, and new forms of authorization data. That includes how teams think about anomaly detection, user sessions versus agent sessions, and what kinds of records need to be stored when an AI system influences a transaction.

That is why AI governance in banking has to be practical.
Not theoretical.
Practical.
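One practical piece of that adaptation, distinguishing user sessions from agent sessions, can be sketched in a few lines of Python. The session registry and blocking logic below are assumptions for illustration, but they show why the tagging matters: an agent session can be blocked without touching the human's own access.

```python
# Hypothetical session tagging: every session carries a type so downstream
# monitoring can distinguish (and separately block) agent activity.

from enum import Enum

class SessionType(Enum):
    USER = "user"
    AGENT = "agent"

# Illustrative registry of active sessions
sessions = {"sess-4": SessionType.USER, "sess-9": SessionType.AGENT}
blocked_sessions: set[str] = set()

def block_agent(session_id: str) -> None:
    """Block agent access without also blocking the human's own sessions."""
    if sessions.get(session_id) is SessionType.AGENT:
        blocked_sessions.add(session_id)

def is_allowed(session_id: str) -> bool:
    return session_id not in blocked_sessions
```

If the two session types were intertwined in one record, this kind of surgical block would be impossible, which is exactly the risk-assessment question Lisa raises later in the transcript.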

Disputes and authorization will define the next phase

Even in this first part of the conversation, you can feel where this is heading.

If an AI agent helps initiate or shape a transaction, then AI payment authorization and banking dispute resolution are going to become much more important.

Because the question is no longer just whether a customer clicked a button.

The question becomes:
What did they instruct?
What did the AI do?
What controls were in place?
And does the final action actually match the original intent?

That is where AI transaction monitoring for agentic banking risk becomes such a valuable framing. Monitoring is not just about fraud scoring after the fact. It is about creating the visibility institutions will need when disputes happen, when customers push back, or when teams need to prove what happened and why.
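A minimal sketch of that reconciliation idea, in Python: store the instruction, the controls, and the execution together, then check that the final action matches the original intent. The record shape and the `matches_intent` helper are hypothetical, not a real dispute-resolution API.

```python
# Hypothetical dispute check: does the executed action match the original
# instruction? Storing the (instruction, controls, execution) triple lets
# the bank answer that question when a customer pushes back.

def matches_intent(instructed_amount: float, executed_amount: float,
                   tolerance: float = 0.0) -> bool:
    """True if the executed amount is within tolerance of what was instructed."""
    return abs(instructed_amount - executed_amount) <= tolerance

record = {
    "instructed": "transfer $150 to savings",   # what the customer asked for
    "instructed_amount": 150.0,
    "executed_amount": 150.0,                   # what actually happened
    "controls": ["allow-list", "amount-limit"], # what was in place
}

assert matches_intent(record["instructed_amount"], record["executed_amount"])
```

The monitoring value is less the comparison itself than the fact that all three pieces of evidence exist when a dispute arrives.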

The future is not less monitoring, it is smarter monitoring

One thing I kept coming back to while recording this episode is that the future of financial services AI is not going to reward institutions that move fastest without structure.

It is going to reward institutions that move intentionally.

The banks that win here will not be the ones that simply deploy AI agents in banking because the market says they should.

They will be the ones that can pair innovation with AI transaction monitoring, payment risk monitoring, banking compliance, and governance strong enough to support the experience.

That is the real opportunity in front of us.

Not just building something new.
Building something trustworthy.

Episode transcript
Hailey Windham
00:02
Woohoo, I'm just gonna leave that woohoo in for a minute. Nothing like technical quality issues. Okay, ready? All right, perfect. So, Tyllen, to start us off, let's level set, right? When people hear the term agentic banking, what does that actually mean?
Tyllen Bicakcic
00:23
Yeah, great question. For us, it means moving from AI that just answers questions at banks to one that helps complete what we call intents and takes action on behalf of customers. So you can think of not just analyzing your financial statements, helping you with budgeting, but actually going forth and making payments on behalf of users, handling transactions, being almost like this more advanced PFM and doing a lot of the banking that you would normally see with online banking or at your bank, but within an AI agentic experience. And that's what we believe this new wave of banking will look like with AI.
Hailey Windham
01:02
I really like that perspective and I like that it will, you know, it has the opportunity to maybe learn from you, but where it's, you know, I can't tell you how many times I'm like, my gosh, I have to log into two different banks in order to send money or I have to, if I want to send money to my kid, you know, even that's kind of a struggle where I'd love for it to, you know, automatically do something for me that even just a simple prompt can get it to do what I need it to do each time and yeah. I just love that idea.
Tyllen Bicakcic
01:33
Yeah, it's like every bank should have a ChatGPT-like experience inside. And then how do you do that safely? How do you make it, you know, more valuable than just asking ChatGPT certain questions? That's what you build with agentic banking, and doing it is very tricky. It requires a lot of complexities around what we're talking about today with fraud and all of this fun stuff. So yeah, as I think all of us can kind of imagine, we will probably be prompting our banks to do a lot of the things you just mentioned.
Hailey Windham
02:05
100%, and I love thinking about the difference, like what is actually the difference between the AI agent and the AI tools that banks are already using today? Because we do have some chat functions in our banking, but sometimes that's going to an actual person, or maybe it's kind of filtering you into a queue based on whatever your questions are in that chat. So, you know, what is the difference between what you guys are bringing to the table versus what banks are already using now?
Tyllen Bicakcic
02:39
I think the easiest way to frame this is that it's similar to when online banking first came about and you had your online banking providers that would reimagine what the banking experience would look like on mobile and on web. But then you still had your customer support providers as well that would come in and provide a new version of what support might look like online. And that's where you start getting the differentiation between the Q2s, the Alkamis, and then maybe the Five9s of the world, the ones that handle more of that customer experience. That's the same thing we're seeing with AI agents playing out in banking. You're always going to have AI agents that will be able to handle more and more support requests and be able to help on the customer side of things. What does it take to actually build an AI agent that can do more of that banking experience, and move from just being that sometimes frustrating little box in the right corner that you really don't want to interact with, to saying, we're going to be the entire front-facing experience now? That's where things will be heading, and who will be the technology provider, or someone that can make this safe, reliable and trusted. And there's a lot that goes into that. So that's how we think about it here at
Hailey Windham
03:57
Love that, and I can't wait to unpack that more with you and Lisa. I know that Lisa's gonna bring an interesting perspective to this conversation as well. But I am all for real-world examples. I love making things practical whenever we think about them. So I'd love to know what types of tasks AI agents could realistically handle inside financial services. When you think about agentic banking, what's the simplest real-world example that would help a community banker or credit union practitioner understand what it could do?
Tyllen Bicakcic
04:36
Yeah, great question. Again, on those levels, it's helpful to think of use cases as intents. Your customers come to your bank with an intent, and oftentimes all the bank sees is the final transaction that happened online, but they don't know the why of why a customer is coming to do something in the first place. I can rattle off hundreds of ones that we've seen, even with our banks in production right now. But just a very fun one: imagine you're a college student and you want to save for a Taylor Swift concert. Is this even going to be a question you're going to go to your banker and ask? Maybe not, but you will call the, you know, automatic transfer savings API to move money from checking to savings. But a real-life use case is knowing what the intent is behind why this person is even doing that. So if I go to my community or regional bank and I'm a college student and I start trusting it and say, hey, I want to save money for a Taylor Swift ticket, it's going to be $600 and it's four months out, can you help me do this? In which case the AI looks at all the previous transactions, helps recommend where the student can cut down on some costs, and then also prompts them to say, do you want me to set up some automatic savings for you so that we can help you reach this goal? Those are things that traditionally you don't see in online banking. You might not even traditionally see this in the branch anymore, but you can bring these experiences back. And that's just one way of moving from how can this AI help me, to the AI going out and actually taking the actions to do that. I mean, we can go into many of them, but this one is always the fun one and was just a very interesting one to see. But this is the kind of insight you get when you can bring this kind of information to your bank.
Hailey Windham
06:24
I like that, and I really like the fact that we're gonna speak truths on this podcast. You maybe won't go into your branch and say, I really wanna save for these Taylor Swift tickets, because if you're like me, you might be afraid that your granddaddy knows the president of the bank, you know, the CEO, and they're gonna say, are you sure that's where you wanna put your money? And, you know, yeah, I do. But maybe it is an easier way to kind of cross that threshold, and then once we learn, maybe we start building good habits. But I can definitely resonate with the maybe-I-want-to-go-a-different-route of asking instead of a person. I'll ask the AI, hey, how can I set this up to automatically do it? You take it out that way, I don't see it, and I don't spend it. Yeah, but I love that. I love that perspective.
Tyllen Bicakcic
07:19
Yeah. And even this, like, I think you'll be more comfortable asking an AI certain things than you will a human. And I think generally what helps unlock more of the things you're comfortable asking is what can this AI actually do. I can probably tell you, within coding, just because my background is also as a developer, more and more junior devs have probably felt more comfortable asking Claude Code or Cursor or whatever AI agent coding tool there is questions that they wouldn't feel comfortable asking a senior developer, because they don't want to feel judged. And I can tell you, since we speak honesty, even with my own, like, I know my wife would much prefer asking an AI agent sometimes about financials than coming to me and being like, hey, what should I do? And if there's something there that can guide her and is more accessible, then that makes the whole experience better. I think this is a real opportunity for the first time to bring back what traditionally community and regional banks have had, which is that trust and that relationship, but without having to always be at the branch, because we know that that's starting to not be as prevalent. But how do you then meet people where they are? And this is a great opportunity to do that.
Hailey Windham
08:39
For sure, banking the people of tomorrow, right? We can't just rely on the old ways and that's-how-we've-always-done-it, because you're not always banking the ones who've always done it that way. You're banking their children or their grandchildren. So I think that that is just a phenomenal way to look at it. And Lisa, I'd love to ask your perspective first on everything Tyllen's just mentioned on agentic banking. Coming from your experience, is there anything that makes you either really excited or really hesitant about this way of thinking?
Lisa Durnford
09:17
Definitely excited about sort of any movement forward in the context of how financial services are provided and accessed by consumers, especially coming from Canada, where I am. For many years, we've sat far behind a lot of these innovations; even kind of basic open banking frameworks are still, you know, definitely a work in progress. And so I'm always a fan and always supportive of moving forward and pushing for better accessibility in financial services. And I think what's interesting about how this is positioned is it gives the banks or the financial institutions, the fintechs, an opportunity to almost curate the experience in a bit more of a structured way, and maybe help consumers and businesses access the services in a way that's going to be more efficient and more effective. But in terms of nervousness, sure, there's always a lot of questions to ask, and we wanna make sure that especially when it comes to moving money and holding money for consumers, we're doing it responsibly and not inviting consumers to use a tool that might expose them to additional risk. And I know we'll get into a lot of the details there later on.
Hailey Windham
10:38
Yeah, I am really excited for that segment within this episode. It's going to be really cool, and that's all we're going to do to tease that up. But Tyllen, you also mentioned, you know, that trust aspect, and so the trust engine, infrastructure for safer banking, because fraud prevention isn't the product, but trust infrastructure is. Traditionally, fraud tools, right, look at activity after it happens. That's what we've seen. It's all about the detection, or the afterthought, instead of that preventative model. But how does agentic banking shift that model, do you think?
Tyllen Bicakcic
11:19
That's a phenomenal question. There's a lot of ways. We've been doing this for two years now, trying to figure out how do you trust AI with money, and we've done many iterations of payments to get to this point where we're working with banks to do that. I think just baseline is, trust already exists at banks. That's number one. If a banker is going to deploy a technology to their consumers, those consumers will probably trust an AI experience more at their bank than they would going into ChatGPT and doing it. I think for a lot of consumers, their first experience with AI might be at a bank with this, because I know many people, I live in a small town here in Durango, Colorado, and if you ask people about ChatGPT or anything with AI, they are very prideful that they try to avoid it. Right? That's just how a lot of the world is too. And if your bank is to deploy something, that's number one with trust. The second thing is we talk a lot about being more reactive with fraud, right? Like, this transaction just happened, what caused this transaction to happen? If you do your banking through an AI, the chain of thought, or the reasoning for why this transaction even happened, is all there in natural language. And if you design the systems very well, which we have, you are able to see how the agent took the steps to actually go and do this. You can't see that level of thought in a human. We know the human's thinking, but you can't actually audit it the way you can an AI agent. You can't audit the conversation between maybe a banker and a teller. Or, you know, let's say there's a fraudster that's trying to coach up someone to ask questions or to do things at their bank. You could actually see more of how that questioning is happening through an AI agent. So, I mean, my whole thing is, maybe I know I'm biased, but I also feel this is very correct, in that I believe this makes it a lot safer.
If you just lead with having an intelligence layer on top of all of the
Tyllen Bicakcic
13:33
reactive fraud prevention that we do. I mean, speaking about partners, and Sardine: we use Sardine in our AI agent to be able to check, as a new person is adding a payee, is this person on the sanctions list? These are things that, you know, a lot of banks are still bringing on Sardine for, but with these kinds of experiences built well with AI, you can get this level of fraud prevention just built into an AI agent experience. We have 10 layers of what we call fraud detection and screening and everything just within the AI side, let alone their existing side. You know, we talk about fighting fraud, and a lot of it is getting as much information as you can. And this helps you collect a lot of information.
Hailey Windham
14:23
I truly, I see it, I get it. And first, I made a note to say, I love how you were talking about auditing the thinking. There've been numerous times that, like, a transaction has gone through, and whether it was frontline that allowed it or maybe it was a system, you know, audibly we've all said as fraud fighters out loud, what were they thinking? Like, now we can actually go back and see the thinking. And I think that that's great. You know, so why did we allow this to go through? How did we rationalize it? What was the intelligence behind that? I think that that just really provides better perspective from the risk side of things. Especially for me as a fraud fighter, I would want to know, how can I trust it? And being able to see that thought process really does help provide that. But you also started to touch on some of the signals or data sources, so I'd love to kind of double click on that. What signals or data sources need to be unified to make safe decisions in real time?
Tyllen Bicakcic
15:26
Ooh, I mean, your bank will always have existing signals that they look for, right? If we're talking strictly from an AI agent perspective, you need to be able to orchestrate the AI well, to where it hits those APIs reliably, and none of your existing bank signals will change. They're still going to pop up where they will. They're still going to pop up exactly, you know, in the software that they have. If it comes from an AI's perspective, an AI acting on behalf of a user, I think that might be one of the key things to know, that it was this AI that triggered this instead of this human that was trying to do these things. In terms of, you know, if someone is maybe adding a payee where the bank accounts might not match, which we do with Sardine as well, I believe a lot of smarter banking people than me know how this all works. But those are the things of, where do you surface this back? You know, compliance and audit people have so many different screens that they look at all the time on where these signals exist. We have our own with Payman. I'm sure they have their own elsewhere. Where you bring all of these things into one place is, to me, more in the design of how you build the AI agent to work with the existing APIs. It's a very good question because it hits on two things. One, you don't have to change much of how you're doing your existing compliance and auditing. That's super important. And then the second is, if this thing is bringing me different signals, how do I surface this and where do I surface this?
Tyllen Bicakcic
17:19
I would say that's still being played out, but at the end of the day, you're going to get them on some technology platform that gives you those answers.
Hailey Windham
17:32
Lisa, any thoughts here?
Lisa Durnford
17:35
I think, yeah, I think we'll frame it from a regulatory compliance perspective in sort of the governance discussion. But I like the push that it's not a net new way of managing risk and it's not a net new process entirely, which would certainly feel daunting and feel potentially prohibitive. It's really just adapting existing processes to new channels, maybe just new sources, like Tyllen was flagging. So we can pull a lot of insight from the precedents that we have already in place. Like, I know we were talking about as a group before, this isn't the first time that the financial services industry has had to adapt to innovation, has had to adapt to new ways of submitting payment instructions, new ways of actually moving money in a practical sense. So this is another iteration of that, another way of a consumer or a business moving money, managing their money, accessing their products. And so it just invites us to look at how to adapt existing programs, existing processes and controls, to ensure we're looking at the right data signals. And the most obvious one, maybe as just a quick example, is of course any institution will have a monitoring program from both a fraud and an AML perspective, monitoring transactions and behaviors for risk. Typically, and Hailey, of course, you would know this, we talk about this all the time, bot detection is usually framed from a fraud perspective as looking for potentially, you know, fraudulent or bad activity. And now we're all thinking about that as, okay, well, that remains true in some scenarios, but now we're also intentionally asking agents, or potentially, you know, bots, to perform instructions on our behalf with good intent. So how do we distinguish these things, and how do we adapt maybe our anomaly detection?
Lisa Durnford
19:37
To understand that context, which is certainly tricky, but teams like Tyllen's are trying to help institutions think through this and understand what data signals there are.
Hailey Windham
19:49
I couldn't agree more. I think this conversation is going to position it so that if maybe we were afraid of that AI in banking, right, this is now going to help prove that it's maybe safer, especially if it's designed properly. If we bring in the things that we need to, if we help that anomaly detection adapt to this new data, or, you know, not net new data, but just the data sources that we should have been bringing in all along, I think that it's really going to help prove that point. So I definitely appreciate your perspective, and I love that it really tees up the next part of the conversation, which we're all really excited for. But, you know, innovation in banking is accelerating rapidly, and governance frameworks often lag behind. We know this, we all know this, we preach about this and probably go to our therapist about this. But if AI is going to influence or initiate transactions, institutions must answer key accountability questions. So for this next section, we're gonna do something a little bit different on the podcast. Lisa is going to play the role of a CRO in a bank that maybe isn't quite familiar with this topic or this area and wants to ask the questions, not to put up the guardrails and the stop sign and tell them to, no, Uno, reverse your way out of here. Instead, it's really to enable this solution with intention. So Lisa, I'd love to go ahead and let you take the reins on this part of the conversation.
Lisa Durnford
21:24
Sure, yeah, thanks Hailey. No, I'm always a little overly excited, maybe, when it comes to the idea of risk assessments and enterprise risk. It's an area I've worked in for a while in different contexts. So definitely, when thinking through this topic, my instinct is to frame it, hopefully not in a boring way, from a risk assessment lens. And so if I were the CRO of a bank kind of thinking through this, maybe I'm feeling like, all right, it's coming for me, what do I need to do on this Monday morning to kind of get going and get my team ready? So where would I start? I would start from the core of any compliance program, which is that risk assessment, understanding where the risks are. You can't design controls, you can't really spin up a compliance program, without actually understanding the risks. And maybe some compliance teams are feeling like they don't necessarily have the expertise in-house to appropriately identify all of the different areas that they might want to be thinking about. That's where I think partnerships with companies like Tyllen's are key. And we'll talk about third party due diligence and third party risks separately, because obviously that comes into play. But a couple of different angles before I dive into just some examples of questions to think about. First, I might ask, where might the financial institution be exposed to risks of agentic banking or agentic payments already, maybe without explicit opt-in? It's something that's being hyped up, being talked about a lot. Card networks, especially, are designing ways for users to interact with these services. It doesn't necessarily need to be coming from the bank directly. So banks, especially banks with issuance and kind of BIN sponsor programs, should probably be asking where these risks might be presenting without having proactively opted into it. And that's fine, but kind of thinking through how to help your customers navigate
Lisa Durnford
23:27
that, and do that in a responsible way. And then second, and maybe the more long-term project, would be thinking about what your business strategy is for intentionally adopting and enabling these products for your customers. The best way to leverage new innovations is to design your risk strategy to work with these new processes, and not think about your risk strategy as a way to prevent or mitigate these new processes. We're looking to kind of jump in and adapt and enable this in a responsible way, with intention. And so, as a couple of examples, and I'm not gonna run through a full enterprise risk assessment here, that would keep us here for a week and be dreadfully boring for everyone, but a couple of key questions that I'll pull out. And then, you know, Tyllen, anything that jumps out to you, would love your thoughts as someone who's probably dug through these questions yourself and with clients. From an operational risk side: how do our teams, and not just compliance teams, I mean, I might focus there, but how do our teams need to adapt to these new products, these new services, these new ways of interacting with consumers and business customers? From a more technical side, a specific example might be how do we handle that circuit breaker feature of potentially identifying maybe a runaway agent, as I've seen it referenced. For example, if an agent attempts a transaction 1,000 times, whether it was supposed to or not, what happens in your system? Does that create 1,000 alerts for someone to run through, probably another agent, but for someone to run through and manage? Or will there be an error management and issue management process in there to kind of break that process?
Lisa Durnford
25:18
Tyllen, I think you mentioned this earlier, but on user access from a systems perspective: how do we, as the bank, differentiate between a user session and an agent session, and why does that matter? How do we interpret that data, how do we log it as a specific type of session, and where does that have downstream impacts on how we manage the interaction, depending on what the instructions are and what actions are taken? For example, if an agent interaction is flagged as risky for any particular reason, are we able to identify that and block the agent's access without also blocking the human's access? Are those distinct, or are they potentially problematically intertwined? From a regulatory risk perspective, the big topic, of course, is dispute resolution, and we'll dig into that a little more as well. But how do our systems receive and store these authorizations, these mandates, this context? We were talking about context earlier, and I also love the audit trail thinking. But how would I do that without having that in my system in some way, as a record of the transaction? So how does our system adapt to pull in that relevant information? And then, thinking of it from Sardine's perspective, how would a bank's risk system leverage that data and potentially build controls or monitoring around the extra authorization records that come through with an agent-initiated transaction? And then a fun one, and maybe that's a weird way of framing it: liquidity risk. From a couple of different angles. I think these risks can be entirely mitigated by the right partnership and the right implementation, but how should a bank think about the risk of an agent initiating transactions so rapidly that it challenges the clearing and the available balance of an account? How is that managed? How does an agent have access to those funds? How do you ensure that double spending isn't possible by using an agent in rapid succession?
Lisa Durnford
27:37
Because we know that true settlement, unless we're looking at stablecoin rails, is going to lag far behind an agent's ability to initiate subsequent transactions. And then from a more systemic level, which is a bit out of scope maybe, but I like to think about it, and Tyllen, I think you actually talked about this a bit in a different talk I was listening to last year: the push from some providers to pull funds out of banks and hold them in a distinct wallet for access by agents. If there were really high-volume adoption of those types of agent services, that's a lot of deposits being pulled out of bank accounts and into wallets that are managed, in a sense, by agents. At that kind of extreme volume, that would affect deposits and how banks think about liquidity and capital management. And then I'll jump to, there are so many more, but I'll jump to third-party risk, because I think that's a really key one when it comes to enabling these types of providers and partnerships. I would want to spend a lot of time with third-party risk teams and vendor management teams to come up with a probably very different questionnaire for these types of partners: what questions should we be asking in terms of how I might interact or partner with a company like Payman? And this is where, Tyllen, I'd love for you to jump in. What kinds of questions do you want clients to ask you? How do you think about building that trust and ensuring that they're managing your partnership responsibly, and that they understand the details, as much as they need to, of how you work together and enable this product for them?
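One common way to close the settlement-lag gap Lisa describes is an authorization-time hold: decrement the available balance the moment an agent's transaction is authorized, not when it eventually settles. A minimal sketch, with hypothetical names, assuming a single-account ledger:

```python
class HoldLedger:
    """Reserve funds at authorization time so rapid agent-initiated
    transactions can't spend the same balance twice before settlement."""

    def __init__(self, settled_balance: float):
        self.settled = settled_balance
        self.holds: dict[str, float] = {}  # txn_id -> amount on hold

    @property
    def available(self) -> float:
        return self.settled - sum(self.holds.values())

    def authorize(self, txn_id: str, amount: float) -> bool:
        if amount > self.available:
            return False  # decline: those funds are already held
        self.holds[txn_id] = amount
        return True

    def settle(self, txn_id: str) -> None:
        # Settlement may arrive much later; the hold kept the funds safe.
        self.settled -= self.holds.pop(txn_id)
```

This is how card authorization holds already work; the point is that agent rails need the same discipline, since an agent can fire follow-up transactions faster than any settlement system can clear them.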
Tyllen Bicakcic
29:30
I mean, if we had hours, I could hit on all of these.
Lisa Durnford
29:33
It's a lot.
Tyllen Bicakcic
29:40
One of the things I'd say is that a lot of this is on the vendor, to help guide the bank. It's too hard, and it's moving too fast, for the bank to know the answers to all of these. But as a vendor, you need to know exactly, like what you just did, Lisa, how the bank is thinking about this, and to ask the questions that you asked. What do I wish the banks would ask me? There are so many things, but how can you know, with how fast AI is moving? One of the things we hit on was: what is the difference between an internal
Lisa Durnford
29:51
Yeah.
Tyllen Bicakcic
30:10
AI agent versus an external AI agent. If I'm a bank now and I'm hearing that these AI agents are, without my knowledge, using people's credit cards to go and buy things, how does that set me up as a bank? There's a reason why today it's very hard to expose your bank's APIs to something like Claude. One question the bank could ask is: why couldn't I just give Claude or OpenAI my APIs to go and do all of these things? Well, people have tried, and it's very hard to set this up within Claude or OpenAI, given all the audit and risk that comes into play when someone is interacting with Claude, a third-party vendor that might now have read and write access to your APIs. Who's at fault if Claude messes up? Are you going to go and sue Anthropic? How does this whole relationship play out? So it becomes very hard to just totally expose your bank's APIs to a third party like that. When it comes to internal, this is where answering all of these questions matters. At baseline: do you have a tight MSA or legal contract that covers the things the bank doesn't even know to ask about with AI? Is this a high-criticality technology vendor, or more on the low-criticality side? Do you have kill switches, like you mentioned? Do you have the ability to shut off one user if they're behaving badly, without ruining the AI experience for everyone else? As you both mentioned, the technology side can move pretty fast, the models are moving fast, but there's so much on the legal side too that we've had to see and do to build out this structure so banks can feel comfortable at the end. If banks are where trust exists, you cannot ruin that trust. And you truly have to believe that the product you're delivering is going to make the bank safer.
Tyllen Bicakcic
32:19
We could not be doing Payman if we didn't believe this technology would improve their overall security experience. And I think that's what you do as a good vendor. You don't come in and say, hey, this is something wild, this is cool, how about you go and try it? We have, like, our VPs of innovation using this, people who might be of a different generation than the customers we think might use it. They feel so safe using these things, and they keep coming back to the app. We talk about what someone will actually use this for. We had one of our clients send our head of product money for coffee. He just sent it because she was like, hey, send me $7.50 for coffee in New York. My immediate thought was, geez, coffee is expensive in New York. If you can make the experience feel comfortable, the risk person can feel comfortable. If people who've been at the bank can use your product, can go through that period of three or four months, and get to the point where they start asking not "why is it not doing this?" but "why can't it do this? I want to give it more and more," then you've got a winning product. And not everyone can do it. It's very hard. I can't imagine another fraud startup just popping up and banks trusting them out of nowhere. You also have to have the background and knowledge to have that kind of credibility.
Lisa Durnford
33:47
Yeah, absolutely. One thing you said there reminded me of something. In the open banking conversation, at a very high level, we talk about read and write access and the different approaches to giving systems access to, let's say, bank account data. What's your opinion on phasing in the use of an internal agent for banking services? Is there an interest, and would you say there's a benefit, in building trust by starting with read access only, doing really intelligent analytics, versus jumping into write access, where agents are actually able to make changes and move money on behalf of the user? Is that even worth the conversation, or are we at a stage where we're diving right in and we're comfortable combining read and write access into the full service?
Tyllen Bicakcic
34:55
I used to think so. I used to think that was the approach, maybe before. But from our side, the technology is there to do the write, and that's where the value comes from. And I remember, there's always this conversation where very heavy payments nerds, which I've been for a long time now, will say:
Lisa Durnford
35:05
Yeah.
Tyllen Bicakcic
35:21
This is a very deterministic system, AI is very probabilistic, you cannot get these two to communicate the same way. And my question back would be: do you truly think it's impossible? And I can tell you, no one will say it's impossible. So then the next step is, then it must be very hard. And if it's very hard and you do it very right, you have a very valuable product. So if we were just to stop at: it's not impossible, it's hard. Can you do the hard things? We have done the hard thing. And now, how do you make people feel comfortable enough to trust this? The way you do it is you literally have them try to break it every which way possible, and what they find is they can't break it. We are running tests on our own product every single day. We have humans and AIs running tests to make sure this write function can't be broken. And really, once you put it in the mindset that this is not actually at all different from a human accessing an account at a branch on behalf of a person, you realize all the benefits you get from an AI doing these things versus a person. And you can also help the person get better at accessing things once you see how the AI is doing it. One of the things on the risk and governance side, as a vendor like us, is showing people: here are the policies. It's not just, woohoo, this agent can do this. No, you have control. We've even sometimes shifted the framing away from trust. Trust is okay, but can the bank control this? That's the question. Control is where a lot of the trust comes from. And it's: yes, here are all the ways you can control this. Here's an OTP request that's tied back to your bank whenever money is being moved; any time that happens, your customer actually has to verify the transaction before it goes through. You mentioned loops; our AI agents aren't allowed to do loops. These are the different things that help prevent some of those thousand-plus transactions from going through. If a payee is going to get added, we also require an OTP for the payee, because you don't know whether the AI might have hallucinated someone to add. Not only do we require the OTP, we call Sardine to make sure this AI agent isn't paying the wrong person and that the bank accounts match. Because we've been in this for two years, and because we've seen not how AIs
Tyllen Bicakcic
37:45
try to defraud the system, but how humans try to defraud the system with AIs. So I don't think the AI tech is actually the bottleneck; it's more about preventing bad behavior from humans interacting with the AI.
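The controls Tyllen describes (OTP verification before money moves, OTP on new payees, no loops) can be sketched roughly like this; the class and flow are illustrative assumptions, not Payman's actual implementation:

```python
import secrets

class PaymentGate:
    """Require a customer-confirmed OTP before an agent can pay a new
    payee; known payees pass, wrong or replayed codes never move money."""

    def __init__(self) -> None:
        self.verified_payees: set[str] = set()
        self.pending: dict[str, tuple[str, float]] = {}  # otp -> (payee, amount)

    def request_payment(self, payee: str, amount: float):
        if payee in self.verified_payees:
            return "approved", None
        otp = f"{secrets.randbelow(10**6):06d}"  # delivered to the customer out of band
        self.pending[otp] = (payee, amount)
        return "otp_required", otp

    def confirm(self, otp: str) -> bool:
        entry = self.pending.pop(otp, None)
        if entry is None:
            return False  # wrong or replayed code: the payment never executes
        self.verified_payees.add(entry[0])  # payee verified; payment can proceed
        return True
```

The important property is that the human, not the agent, holds the confirmation step: a hallucinated payee never gets paid because no customer ever receives and enters an OTP for them.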
Lisa Durnford
37:59
Absolutely. That reminds me of, and I know you worked in the crypto industry for years, how people would love to frame crypto as risky, as if it were acting on its own. Ultimately, it will always be the human attempting to exploit the system that we have to stay ahead of. So yeah, completely agree there.
Tyllen Bicakcic
38:21
Yeah. And there's so much in this space where this tech is not something crazy. And the other part is, it's built on all the tech you already have; AI doesn't come in and totally replace every other experience you have. Because I'm also nerdy on this side: OpenClaw, if y'all have heard of OpenClaw, its big value is that it can connect to all of your systems, bring them into one place, and start interacting with them and orchestrating. Well, all of the policies that exist on those systems are still there. Now the question is, how do I know how this AI agent is interacting? That becomes very hard with something like OpenClaw; it becomes much, much easier for a bank if it's built internally first. So instead of thinking read versus write, I would think: what can I build internally to understand this, and then how do we bring it external? I know many OLBs that have tried to expose their APIs as an MCP to a third party and started with read access only. That fails because, first, it's technically hard for a consumer to do this, and second, why are you sending consumers away from your bank? I think the biggest piece of retention is how often people interact with your digital banking provider. If you can increase that, don't send them away. Bring that information back to you and keep improving the experience there.
Host
Hailey Windham
Fraud Forward, Sardine

Guests

Tyllen Bicakcic
Co-Founder at Payman
Lisa Durnford
Head of AML Compliance