FRAUDFORWARD

Agentic banking and the future of AI-driven financial control

35 min

What’s up fraud fighters, and welcome to Fraud Forward!

This episode picks right back up in one of the most important places this conversation could go: what happens after we move past the shiny AI headlines and start talking about control, governance, disputes, and real accountability inside financial services.

Because that is the real conversation around agentic banking.

Not just whether AI in banking is possible.

Not just whether customers will use it.

But whether banks are ready to govern it responsibly when AI agents in banking start influencing payments, authorization, disputes, and customer trust.

What I really appreciated about this part of the conversation is that we did not stay at the hype layer. We got into what agentic payments actually require if institutions want to use them safely. We talked about AI governance in banking, AI transaction monitoring, banking compliance and AI, and what it means to preserve trust when AI banking transactions start happening closer to the customer.

And that matters.

Because if agentic banking is going to become part of the future of financial services AI, then we have to build it with intention. We have to build it with controls. And we have to make sure trust infrastructure in banking grows with the technology instead of getting left behind by it.

What you’ll hear in this episode

  • How agentic banking can solve real customer friction in ways traditional digital banking tools often cannot
  • Why AI agents in banking create new questions around governance, accountability, and control
  • How AI payment authorization, OTP checks, and layered safeguards can help make AI banking transactions safer
  • Why banking compliance and AI need stronger context sharing, better auditability, and clearer ownership
  • How AI transaction monitoring may need to evolve from reviewing transactions alone to reviewing prompts, instructions, and agent-led behavior
  • Why banking dispute resolution becomes more complex when a customer authorizes an AI agent instead of initiating every action directly
  • How community bank innovation and regional bank innovation can benefit from agentic banking without giving up control

You should listen to this episode if

  • You work in fraud, compliance, payments, risk, digital banking, or bank operations
  • You are trying to understand how agentic banking may affect governance and customer trust
  • You are evaluating AI in banking beyond chatbot functionality
  • You care about banking fraud prevention, AI fraud prevention, and safer banking with AI
  • You want a more practical conversation about how AI-powered banking actually works inside real institutions
  • You are asking how banks can modernize customer experience without creating new gaps in control

If you liked this episode, be sure to subscribe and review the podcast on iTunes, Spotify, YouTube, or wherever you listen to podcasts.

Episode notes & key takeaways

Before we double click on the notes, I just want to say that my marketing team told me I need to structure these notes a certain way in order for people to find my podcast. The below is a bit of that.

Agentic banking gets real when it solves real customer problems.

One thing I wanted to do in this conversation was make agentic banking feel practical.

Not theoretical.

Not futuristic just for the sake of sounding innovative.

Practical.

Because that is where a lot of conversations around AI in banking start to lose people. They stay too abstract. They talk about AI-powered banking like it is just a concept, when what people really want to know is simple: what would this actually do for me?

That is why I shared my own example around paying my cousin for helping with my kids. It is inconsistent. It is easy to forget. It does not fit neatly into a standard recurring payment workflow. And that is exactly why agentic banking matters. It creates room for a more adaptive customer experience. One that responds to intent instead of forcing every need into a fixed product feature.

That is a real shift in digital banking AI.

Governance has to move with the transaction

Another theme we kept coming back to is that AI governance in banking cannot sit off to the side.

It has to move with the transaction.

Tyllen made a point that I think is really important here. Banks do not need to throw out their entire governance framework just because AI agents in banking are entering the picture. The core banking provider still matters. Existing policies still matter. Fraud controls still matter. But now institutions also need better context around why a transaction happened, what the agent was asked to do, and whether the action matched the bank’s rules.

That is where this gets interesting.

Because if agentic payments are going to scale, then banks need more than a result. They need visibility into the logic behind it. That is what makes AI banking transactions more governable, more defensible, and honestly more trustworthy.

Trust comes from control, not from hype

This was one of the strongest takeaways for me.

Trust does not come from saying the technology is smart.

Trust comes from control.

That means OTP verification. That means layered screening. That means keeping a clear record of who initiated what, which session was user-led versus agent-led, and what safeguards were triggered before money moved. That also means knowing when the bank can step in, shut something down, or separate an AI agent’s activity from the customer’s direct access.

If we want safer banking with AI, we cannot treat trust like a branding exercise. It has to be operational. It has to be built into the product, the authorization layer, and the decision trail.
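As a sketch of what "operational trust" could mean at the authorization layer, here is a minimal, hypothetical pre-authorization check that records which safeguards fired before money moves. The field names, check names, and $100 threshold are all invented for illustration; no real bank's controls look exactly like this.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    initiated_by: str   # "user" or "agent" -- who started this session
    otp_verified: bool  # did the customer complete an OTP challenge?
    session_id: str

@dataclass
class AuthResult:
    approved: bool
    safeguards_triggered: list = field(default_factory=list)

def authorize(req: PaymentRequest, otp_threshold: float = 100.0) -> AuthResult:
    """Run layered safeguards and keep a record of exactly which ones fired."""
    triggered = []
    # Layer 1: agent-led sessions always require OTP confirmation.
    if req.initiated_by == "agent" and not req.otp_verified:
        triggered.append("missing_otp_for_agent_session")
    # Layer 2: high-value payments require OTP regardless of initiator.
    if req.amount >= otp_threshold and not req.otp_verified:
        triggered.append("missing_otp_high_value")
    return AuthResult(approved=not triggered, safeguards_triggered=triggered)
```

The point of the sketch is the decision trail: even a declined payment leaves behind the list of safeguards that fired, which is exactly the kind of record a fraud or dispute team needs later.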

Banking compliance and AI need better context

Lisa brought a really important perspective to this conversation because she kept grounding everything back in the systems compliance teams already understand.

That matters.

Because one of the biggest mistakes people make when they talk about financial services AI is acting like this is a completely new universe. It is not. It is a new interaction layer. A new data structure. A new operational challenge. But the core need is still the same: institutions need enough information to understand what happened, who initiated it, and whether the transaction was authorized and compliant.

That is why context sharing is so important.

If banking compliance and AI are going to work together, institutions need to preserve the record. The prompt. The instruction. The session type. The authorization trail. Without that, fraud teams, risk teams, and dispute teams are being asked to make decisions in the dark.
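One hypothetical way to "preserve the record" is to store an audit object alongside each transaction that captures the prompt, the session type, and the authorization trail in one place. All field names here are invented for this sketch.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentTransactionContext:
    """Audit record preserved alongside the transaction itself."""
    transaction_id: str
    session_type: str          # "user_led" or "agent_led"
    prompt: str                # the customer's natural-language instruction
    agent_interpretation: str  # what the agent decided to do with it
    authorization_events: list # e.g. ["otp_sent", "otp_verified"]
    timestamp: str = ""

    def to_audit_log(self) -> str:
        """Serialize the record for an append-only audit log."""
        record = asdict(self)
        record["timestamp"] = record["timestamp"] or datetime.now(timezone.utc).isoformat()
        return json.dumps(record, sort_keys=True)
```

With a record like this attached to every agent-led transaction, a fraud, risk, or dispute team is no longer making decisions in the dark: the instruction, the interpretation, and the authorization steps all travel together.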

AI transaction monitoring is going to evolve

This conversation also pushed into a really important future-state question: what does AI transaction monitoring actually look like when the action starts with a prompt?

Because that changes the review process.

It is one thing to review a suspicious transaction after it happens. It is another thing to understand the natural language, intent, and agent behavior that led to it. Tyllen talked about how engineering is already shifting in this direction by reviewing prompt logic, not just final code output. That same mindset may become more important in banking fraud prevention too.

Not because the goal changes.

But because the evidence does.

That means AI fraud prevention may increasingly depend on understanding what the agent was asked to do, how it interpreted the request, and whether the final action matched the original intent.
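A toy illustration of auditing intent rather than just the transaction: compare what the customer asked for with what the agent executed. The regex-based extraction below is deliberately naive and purely illustrative; a real monitoring system would use proper natural-language understanding, not pattern matching.

```python
import re

def extract_intent(prompt: str) -> dict:
    """Very rough intent extraction: pull a dollar amount and a payee name."""
    amount = re.search(r"\$(\d+(?:\.\d{2})?)", prompt)
    payee = re.search(r"\bpay\s+([A-Za-z]+)", prompt, re.IGNORECASE)
    return {
        "amount": float(amount.group(1)) if amount else None,
        "payee": payee.group(1).lower() if payee else None,
    }

def action_matches_intent(prompt: str, executed: dict) -> bool:
    """Flag agent actions that drift from the customer's stated instruction."""
    intent = extract_intent(prompt)
    return (intent["amount"] == executed.get("amount")
            and intent["payee"] == executed.get("payee"))
```

The shape of the check is what matters: the monitoring question shifts from "is this transaction suspicious?" to "does this transaction match what the customer actually asked for?"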

Disputes are going to get harder before they get easier

We also spent time on banking dispute resolution, and honestly, this is one of the places where agentic banking forces the industry to get very real very fast.

Because if a customer says, “I didn’t authorize that,” but the customer did authorize the AI agent, then what happens next?

That is not a small question.

That is a core operational question.

Lisa did a great job breaking down how dispute resolution may need to evolve when authorization is delegated, delayed, or partially automated. And what stood out to me is that the answer keeps coming back to the same thing: context. If both parties do not have enough information to understand what the customer intended, what the agent executed, and what controls were in place, then the dispute process becomes much harder to resolve fairly.

That is why standardization matters so much here. Not just for efficiency, but for accountability.
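A tiny, hypothetical illustration of why a standardized context record helps disputes: once the industry agrees on which fields must accompany an agent-initiated payment, an analyst can immediately see which evidence is missing before review even starts. The required field names below are invented for this sketch.

```python
# Hypothetical minimum evidence set for a disputed agent-initiated payment.
REQUIRED_CONTEXT = ["prompt", "agent_action", "session_type", "authorization_trail"]

def dispute_context_gaps(record: dict) -> list:
    """Return the evidence fields missing (absent or empty) from a dispute record."""
    return [f for f in REQUIRED_CONTEXT if not record.get(f)]
```

A complete record returns an empty list; anything else tells the dispute team exactly where they are still in the dark.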

Community banks should not sit this out

One thing I do not want community banks or regional institutions to hear in this conversation is that agentic banking is only for the biggest players.

It is not.

In a lot of ways, this may be one of the biggest opportunities for community bank innovation and regional bank innovation if institutions approach it with the right controls in place. Because the real differentiator is not going to be who talks about AI the most. It is going to be who builds the safest, smartest, most trusted experience around it.

And that is still a people business.

That is still a relationship business.

Which means this conversation belongs in community banking just as much as anywhere else.

AI is coming to banking

What stayed with me most in this episode is that agentic banking is not really a conversation about whether AI is coming to banking.

It already is.

The real question is whether banks are going to build it responsibly.

Whether they are going to embed trust directly into the system.

Whether they are going to treat governance, fraud prevention, authorization, and compliance like core design requirements instead of cleanup work after launch.

Because fraud is still human-driven. Pressure is still real. And when money moves, especially in AI banking transactions, the consequences are never just technical. They hit trust. They hit stability. They hit real people.

So no, I do not think this is something institutions should fear.

But I do think it is something they need to take seriously.

Agentic banking can create a smarter, more adaptive experience for customers. It can strengthen banking fraud prevention. It can support better AI transaction monitoring. It can help institutions build safer systems.

Episode transcript
Hailey Windham
Hailey Windham
39:56
I think this has been great. I'm going to kind of jump back in here for a second, Lisa, if you don't mind. I really love the perspective, Tyllen, of having that where you build it for your bank. It's not that you're going to come in with these, you know, black box rules of "this is how it's done." It's really that you own it. You build it to the specifications of what your customers are asking for and what you're willing to give them access to at this time. And I think it's also a way to kind of safely onboard, right? It's like, we're not giving you the keys to the kingdom, but we are going to test it. We are going to look to see, does this even work for our customers? Does this work for the solutions we have? If not, then maybe it doesn't make sense. But if we are looking to innovate and to bank our current customers' kids and grandkids, maybe this is the way to move forward. And as you guys were talking, I was thinking, you know what, I think there is another use case, at least for me. Right now, the external funds transfer process in my current online banking is a nightmare. So I have to go outside to a P2P app and transfer money to my cousin who helps pick up my kids after school during the week. So every week I told her, I said, look, I've got a lot going on and I'm going to forget. No matter what, no matter the time, even if I set up a calendar reminder, if I'm in the middle of something work-wise, I'm probably going to forget. Please text or call me, and don't ever act like you can't, because you did a service and I need to pay you. And so I get these text messages on Saturdays: hey, Hailey, just a reminder, you forgot to pay me. And I'm like, oh my gosh. And I can't set up a daily or every-Friday payment, because she gets paid for whichever days she picks them up. Sometimes it's two days, sometimes it's three days.
So I would like to have a notification from my banking app that says, hey, do you want to pay your cousin, and how much? And ask me on Fridays. Ask me Friday nights at eight, when I'm not working. That's the one time I can say for sure I'm not working. That would work perfectly. So I just wanted to throw that out there. There's a way that agentic banking would work for Hailey.
Tyllen
Tyllen Bicakcic
42:17
Yeah, I love that. I mean, that's definitely possible right now. And to add on that point, think about what an OLB would have had to do in a pre-AI world. This becomes a product roadmap feature, right? Now we need to see where we add this on the screen. Now we need to know how to fight for resources and do all of this. This is where the shift in design, even within teams, will come: okay, here's another intent. How do we support this intent? Do we need to connect another API? Do we need the AI to be able to make cron jobs? Very cool. Once it has the ability to make cron jobs, it's not just that one intent it opens up. It's like, okay, remind me to check how much I should pay my vendors as an SMB. And you don't need to build a whole new SMB experience for that. This is where the bank can start to just lean into everything. One of the things we work on with our banks is, how do we market this? How do we get people comfortable with it? Literally the marketing is: just ask it. Just ask it. You have a question? You don't think it can do that? Just ask it. It'll probably be able to do it or help you. And that's what we did with ChatGPT. Everyone uses it. Maybe the first couple of days they realize the power, then they lag off a little bit. And then fifteen days later, something clicks, and you just keep coming back to it and back to it, and you start using it for more and more things. That would be such a beautiful experience for a bank to see for their customers.
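A quick sketch of the agent-created cron-job idea described here, applied to Hailey's Friday-night reminder. The rule shape, field names, and schedule format are all invented for illustration; this is not any vendor's actual format.

```python
from datetime import datetime

# Hypothetical rule an agent could create from the request:
# "Ask me every Friday at 8pm whether to pay my cousin, and how much."
REMINDER_RULE = {
    "schedule": {"weekday": 4, "hour": 20},  # Friday at 20:00 (Monday = 0)
    "prompt": "Do you want to pay your cousin this week? How much?",
    "requires_user_confirmation": True,      # the agent asks; it never pays on its own
}

def due(rule: dict, now: datetime) -> bool:
    """Return True when the scheduled check-in should fire."""
    schedule = rule["schedule"]
    return now.weekday() == schedule["weekday"] and now.hour == schedule["hour"]
```

Because `requires_user_confirmation` is set, the rule only surfaces a question on Friday evening; the payment itself still goes through the bank's normal authorization path.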
Hailey Windham
Hailey Windham
43:46
Totally agree. And I'm sorry, Lisa, if I kind of overtook this portion, but I do want to get back to the way we were framing this originally, because I have another question. We talked about influence over the financial transaction. So I just want to ask, especially as we're still talking on the governance side: what governance structure should exist around these AI systems that influence financial transactions?
Tyllen
Tyllen Bicakcic
44:16
One, as mentioned, all of your existing governance for financial transactions stays the same. You have your core banking provider, you have your policies and risk management there, that doesn't change. You have your online banking provider, that doesn't change. Now, for the financial transactions, what you can provide from here is just more data. If it's initiated through an AI, you know the system requires an OTP check. Boom, there's determinism for you. The transaction cannot go through unless the user checked the email or text message that was sent to them by your bank, and then that information gets plugged in. Then from our side, this is where you have to build a very strong engine. It's not enough for us just to say, hey, we're going to take your core banking provider, we're going to take their APIs, we're going to put them in an MCP, and we're just going to connect them to a Claude model. That's not enough. You have to build many architectures of AI agents to be able to make these decisions. And then I need to go back to the bank and be able to show the compliance team: here's exactly how the agent thought about it. One, here's the input that came in. Two, was this even allowed by the bank? We have our own input sanitizer that uses an LLM, that uses RegEx, that uses Sardine, everything, to make sure this intent can even pass through. And then you see all these layers of decision-making, and then you get that from the payment side, and it's like, boom, here you go. You can make your decision on whether this was good or not, let alone what you also get from your existing core provider, from your existing fraud provider, from your existing online banking provider. This is where you truly have to make that financial transaction feel safer, smarter. And one of the things I always say is we've just created the greatest technology ever.
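A very rough sketch of the layered input sanitizer described here: a RegEx layer plus a policy layer, each leaving a note in a decision trail that compliance can review later. The patterns and layer names are invented for illustration, and `policy_layer` is a stand-in for the LLM and risk-vendor (e.g. Sardine) checks a real engine would run.

```python
import re
from typing import List, Tuple

# Hypothetical patterns a bank might refuse to pass through to an agent.
BLOCKED_PATTERNS = [
    r"ignore (all|previous) instructions",  # prompt-injection phrasing
    r"transfer .* to account \d{10,}",      # raw account-number transfers
]

def regex_layer(intent: str) -> Tuple[bool, str]:
    """Cheap deterministic screen that runs before anything smarter."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, intent, re.IGNORECASE):
            return False, f"regex_block:{pattern}"
    return True, "regex_pass"

def policy_layer(intent: str, allowed_intents: List[str]) -> Tuple[bool, str]:
    """Stand-in for LLM / risk-vendor layers: is this a kind of intent the bank allows?"""
    for allowed in allowed_intents:
        if allowed in intent.lower():
            return True, f"policy_pass:{allowed}"
    return False, "policy_block:unknown_intent"

def sanitize(intent: str, allowed_intents: List[str]) -> Tuple[bool, List[str]]:
    """Run every layer in order, keeping the decision trail for compliance review."""
    trail = []
    for layer in (regex_layer, lambda i: policy_layer(i, allowed_intents)):
        ok, note = layer(intent)
        trail.append(note)
        if not ok:
            return False, trail
    return True, trail
```

The decision trail is the part that maps back to the conversation: every pass or block is something the compliance team can later show, layer by layer.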
Tyllen
Tyllen Bicakcic
46:09
Outside of fire, that is. With AI, it's like, don't be so scared. The technology is ready today. It's more than ready. It was probably ready last year, but today it's even better. Next year it's going to be even better. And then, how do you trust the vendor to be able to help you with these things and provide all of this? This is where you truly need to understand banking also, and not just, you know... You have to have the banking experience. You need to understand what makes bankers feel comfortable, because in turn that's going to make them feel comfortable with their customers.
lisa
Lisa Durnford
46:41
Tyllen, you just touched on so many things that I think hit on different aspects of compliance and dispute management as well. I know we want to talk about disputes. When I was first thinking through these concepts of how we interact with and manage the risks of agentic banking, it was helpful for me, at least, and I'm sure for some other compliance folks, to tie it back to compliance systems we already understand and already use on a day-to-day basis. How does this tie in? How does this fit in? It all comes back, really, at the end of the day, to data sharing and understanding that context. So you talked about how that context is shared, how the influence or the intent, the instructions, are shared as part of that transaction. I think a lot of compliance minds would think back to the travel rule and how critical that is in terms of managing the risks and obligations associated with transactions, especially when those transactions flag risky behavior that might require investigation, might require reporting, might require dispute resolution in the future. And without proper context sharing, we're really in the dark when it comes to addressing and solving for all of those things. So the travel rule, as it applies to wires or really any payment, applies here in the same sense: we are at risk if we don't have enough information about that transaction, about the person initiating the transaction, the counterparty, all of the parties, when it comes to monitoring, when it comes to risk flags, when it comes to reporting suspicious activity, when it comes to resolving disputes. It's really so key. And I think how you framed setting up that system to ensure that all of that data and context is shared is really going to help with that comfort level of feeling like, I can manage these risks because I'm getting all the information that I need.
That also touches on a term I've heard many times, "know your agent," and we talked about it a little bit before this recording, but I think ultimately it comes down to just understanding who you're interacting with and where the instructions are coming from. How do I identify where the instructions came from? And of course that could be solved for by ensuring that that proper data package, essentially, is shared with the transaction, and by identifying an agent-led session versus a user-led session and understanding how that works. And it is comforting, I suppose, for me at least, to feel like there are layers to how we can implement this responsibly, layers to how we can adapt the compliance processes that we're already familiar with, that we're already operating with, to manage maybe new data sets, new data fields, and just new ways of interacting with our consumers, really, at the end of the day.
Tyllen
Tyllen Bicakcic
49:52
There's something you said that made me think about something that could actually help a lot on the fraud and compliance side, which is that today in engineering, you basically assume the AI agent has done the code right. Before, what auditing looked like for engineers was: hey, you go code, and then let me, as an engineer, review your code line by line and make sure it all looks good. As AI kept getting better and better, that kind of absolved the need for that. Really, what you ask for now is: let me look at your prompt. Let me see what you asked the AI agent to do, so that I can understand, did you structurally ask it to do the right thing? And there was this new startup that just came out from the former GitHub CEO to basically attach prompt logs to GitHub code reviews.
lisa
Lisa Durnford
50:45
Mm-hmm.
Tyllen
Tyllen Bicakcic
50:48
This is very similar to what you should start seeing in the transaction space, where it's like: assume the agent can definitely do the transaction. What was asked of the AI agent to go and do these things? Managing that shifts a lot of how you audit. You're starting to audit natural language. You're starting to audit conversations. And then you want to see, does this fit the mold of a request that should pass through? Yeah.
lisa
Lisa Durnford
50:52
Yep. (51:15) Yeah, exactly. And how does a tool like Sardine, how do banks' risk tools, adapt to monitoring instructions, prompts, natural language? It's the same goal, but certainly a new way of monitoring, for different structures of data, in a sense, to give you that proper context. Yeah, no, it's interesting.
Tyllen
Tyllen Bicakcic
51:38
This is why I would push to start internally with it. Because compared with external: internally, our engineers have Cursor, they have Claude Code. We have our own OpenClaw AI agent instances running that could do engineering. If I were to just open this up externally, I can't really audit it, even if they do it on their own Claude, and figure out what's going on. And if they do it on their own GitHub repo, that's the same problem you're going to run into if banks just start opening this up to everyone. Because that tech is still being built out, and it's still being developed, like attaching that identity, as you mentioned, KYA: how do we attach a person's identity to a verified AI agent? Because at the end, who's prompting this agent to go and do these things? And how do we pass that information back? Because if I look at it right now, what OpenClaw opened up, no pun intended, is the ability for multiple people within an organization to act with the same instance of an AI agent.
lisa
Lisa Durnford
52:23
Yeah. Yeah.
Tyllen
Tyllen Bicakcic
52:41
For example, we have our ticket creation system for our engineers. It uses my API key. So it doesn't matter if I use it, if my co-founder uses it, or if our head of product uses it: it always looks like the person that created the ticket was me. But that's not true, because first, the AI agent created the ticket, and second, the AI agent might have been prompted by my head of product to create the ticket. So how do I audit to that level of identity, all the way down? It's very hard externally. People are still building. We're building things to make that easier, but it's going to take a little bit. But if you build it internally, you can always say: okay, this user is authenticated through my OLB. Boom, I have their identity. They're interacting with my AI agent that I've supported. Boom, I have that. The AI agent is the one making the calls, because the payment provider passed that information along. So now I know Tyllen is the authenticated user, because you should not be giving your bank account access to someone that's not authenticated, right? So: built-in trust, AI agents there, and then boom, that's that. I think those are the two levels of how to think about this if you're a bank professional, where one is very scary and the other one is not so scary, very manageable, and doesn't change much of what you're doing.
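The attribution problem described here, a shared credential hiding the real human behind an agent's action, can be sketched as a log record that carries both identities. The field names are hypothetical; the point is simply that auditing must key off the prompting human, not the API key.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAction:
    api_key_owner: str  # whose credential the shared agent runs under
    prompted_by: str    # the authenticated human who issued the instruction
    agent_id: str
    action: str

def attribute(actions):
    """Group actions by the human who actually prompted them,
    not by the shared API key they executed under."""
    by_human = {}
    for a in actions:
        by_human.setdefault(a.prompted_by, []).append(a.action)
    return by_human
```

In the ticket-system example, every action would show `api_key_owner="tyllen"`, but grouping by `prompted_by` still surfaces who actually asked the agent to act.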
Hailey Windham
Hailey Windham
53:57
So you guys have teed up the next section perfectly, talking about authorization, right? Disputes, and maybe even loss of ownership. These are those operational realities. You know, agentic banking forces the industry to rethink authorization. So the scenario could be: a customer disputes a payment and says, well, I didn't authorize that. Yeah, but the customer did authorize the AI agent, and the agent initiated the transaction. So then what happens? You know, Lisa, I'll ask you first. How do you think disputes may evolve if AI agents are interacting with financial systems?
lisa
Lisa Durnford
54:35
Yeah, I think a lot of what we've talked about ties into this very well, but how we think about resolving the dispute and understanding the situation we're reviewing is certainly going to change. The same goal is in mind: was this an authorized transaction, and is this a fair dispute? And of course the merchant typically comes into this conversation as well. I mean, we've talked about this in the context of PSD2, and now PSD3, compliance, and ensuring strong authentication for different types of transactions. As soon as there's a delay between the instruction, the thought put out there by the consumer, and the actual transaction, that creates this opportunity for a potential gray area between what the consumer's intent was and whether the transaction was authorized by the consumer. And so there are frameworks under PSD2 for thinking about delayed authorization, merchant-initiated transactions, having cards on file that you've pre-authorized. So maybe there's sort of an agents-on-file kind of concept, like you were talking about a bit, Tyllen, of having pre-authorized agents and payees in your system. So ultimately, what I would say it comes down to is: how do we adapt dispute resolution policies to close that gap, to ensure that we have enough context? When we are reviewing the transaction information, do we have enough context to close the gap between the prompt or the instructions initially provided by the person and what was actually executed by the agent? And if there wasn't a human in the loop, and obviously it would certainly help to have that deterministic interaction of OTP, for example, if there isn't that, how would we get comfortable and feel confident that the actual transaction matches the goal or the intent of the initial instructions?
Part of solving for that, of course, is having that prompt, having that context shared in the transaction record somehow. So it's part of the audit to audit the thinking, which, when you said that earlier, reminded me of dispute resolution. So to actually have that context is great. One thing that stands out to me as a challenge, though: we could all agree that this is the process and this needs to be shared, but whenever it comes to data sharing, and we see this of course with transaction data at its core, the challenge is standardization, ensuring that this can be done efficiently and at scale. We have wire transaction standards now. We have card network standards. We attempt to standardize this at immense scale across so many different systems and interactions. And I think we're seeing that developing now, and Tyllen, I would love your perspective. Card networks, for example, are certainly coming out with protocols to implement on both the purchaser and issuing side and the acquiring and merchant side. And I think merchants have a very interesting role to play here when it comes to mitigating the risk of a dispute and managing risks to consumers when enabling agentic payments at the merchant level. Visa and Mastercard have both put out their strong opinions, and there are a lot of standardized protocols being developed to try to add this element of standardization to the messages and to the data sharing, to make this process efficient. But it seems like the industry broadly agrees, which makes sense, that we can't really manage the risks here, and we can't effectively resolve any dispute, if both parties don't have sufficient context to understand what the intention was and what actually happened.
Tyllen
Tyllen Bicakcic
58:38
Yeah, it makes me think: sometimes you can just think of AI agents like children or employees, right? I'm going to give my card to my kid and they're going to go and spend what they want. I actually don't even know how fraud handles this, but I would assume the onus is on me at the end. Can I go and dispute it? How would they know? They could say, let this go. But then it comes back to: who is Tyllen? Who is Tyllen giving his card to? Has he ever filed fraud like this before? And then the same thing with an AI. It's safe to say that the top models are getting to the point where they can do what you want. But what model are you using? Maybe you're not using a top model. Maybe, who's the vendor that's giving you this AI agent? And if it's my own AI agent that I've deployed using OpenClaw, let's say, you can't blame an open source project, right? So at the end, it's your card. Who's the dispute going to go to? Now it makes sense to me why Mastercard, or whoever it was, called it verifiable intents, but I actually still don't know what that means. Partially I need to dive into it, but also, can the fraud...
lisa
Lisa Durnford
59:57
Mm-hmm.
Tyllen
Tyllen Bicakcic
60:06
You know, can a Sardine-type tool see what the conversation was that led to this AI taking this action? I'm sure in that case it would make it a lot easier to see whether this is fraud or not. But it's like, can we standardize getting that information over? Can we standardize which agent acted on whose behalf? Can we standardize authorization for those things? If you can get all of that, it's way easier, right? But as you said, the bottleneck is, who's going to sit over all of this, you know, how do you enforce that?
lisa
Lisa Durnford
60:35
Yeah, yeah, how do you build that oversight from a centralized perspective? It's like we need the NACHA of payment instructions. Which might be real, yeah, exactly.
Tyllen
Tyllen Bicakcic
60:43
Yeah. Which might have to be the case.
Hailey Windham
Hailey Windham
60:49
I think what I was thinking about, especially in regards to disputes with this entire conversation, was when I worked disputes, I loved whenever we would get an on-us ATM dispute. Because I was like, I don't have to wait on anybody to provide the documentation. I don't need anybody to provide the video evidence. I controlled it all, and it was click, bing, bang, boom, pull over all the data I needed. And then I had my decision, and the decision was usually...
lisa
Lisa Durnford
60:52
See you.
Hailey Windham
Hailey Windham
61:19
We could find the car that drove up to the ATM, look up the license plate. Does that match who is registered to the account? If not, does that person live in the household? And then we could bring that information back to our customer, our member, and say, hey, do you know this person? This person is who authorized it. Obviously not sending over the picture in an email, because that ends up on Facebook and then you have a whole other can-of-worms issue. But I was thinking, as we talked in the beginning about the governance of it, that truly you own it, you bring it on at whatever level your financial institution is ready for. When you have those transactions where the customer says, I didn't do this, or I didn't authorize it, but you have all of the rationale for why it was accepted, why it was processed: here's the thinking, here's why we did it, or why the AI did it. You've essentially got your on-us ATM dispute, where you've got all the proof you need in order to know how to move forward. And so I looked at it and I was more excited, because if you dispute this, I already know I have all the data I need to prove that either it was authorized, or that someone exploited a gap or a vulnerability in the system that we can then pinpoint and fix moving forward.
Tyllen
Tyllen Bicakcic
62:44
Yeah, that's the whole hope, is it can be a lot easier once we standardize all of this, or once, you know, developers or whoever's building the systems are even told how this thing needs to happen in order to get mass market adoption, because no one's going to just hand over their credit cards otherwise. I think it's also on us to protect the consumers. I know a lot of people don't feel safe giving their credit cards to an AI agent. But I know people are feeling safer and safer, because, as mentioned earlier, a VP of innovation at a bank keeps asking us, why can't this do more? Why can't this do more? The trust is coming from the intelligence that this thing has. So then the "why it can't do more" comes down to the governance of these things. But I will say people are going to start doing it, and they're already doing it, because it's just too valuable at this point. As with everything with AI, I think it accelerates everything. And in a way, we're already seeing that with governance.
Hailey Windham
Hailey Windham
63:51
Completely agree. Do you guys have five minutes to finish it up? I know we're over time. Okay, sorry about that.
Tyllen
Tyllen Bicakcic
63:55
I won't. Yeah. No, this is fun. Very, very fun. Feels like we could have that show. This was cool. Yeah, I don't know, a conference or something. Yeah.
Hailey Windham
Hailey Windham
64:00
Ahem.
lisa
Lisa Durnford
64:06
A show.
Hailey Windham
Hailey Windham
64:08
I agree. Yeah, I totally agree. Maybe we'll put this up as a conversation. I'm actually thinking like, I'm going to bring you in on a webinar, Tyllen, if you don't mind. Okay, great. Great. Okay, so moving on, let me write this timestamp down.
Tyllen
Tyllen Bicakcic
64:20
No, that would be awesome. Would love it. Yeah.
Hailey Windham
Hailey Windham
64:29
Right, so I think what we are understanding as part of this conversation is that technology is moving quickly. The governance frameworks, we know, often take longer, but it doesn't mean that we have to stop innovating, and it doesn't mean that we don't need to prepare for this next phase of banking. I'll just call it that. It's the next phase. It's the phase that we technically should already be living in, but unfortunately we have lagged behind, especially as it comes to governance and regulatory frameworks and what are we and how should we and blah blah blah. So, Tyllen, for you, I want to ask, you know, what excites you most about the future of AI in banking?
Tyllen
Tyllen Bicakcic
65:10
One of the things, honestly, it's going to sound very cheesy, but we do this every Friday where we share the smiles people have when they use this product, when they use AI in banking. It's just like, I don't think I've smiled with my banker in so long, because every time I go there, it's usually like, okay, what mortgage rate are you going to give me? What are you going to tell me that I should be worrying about? What are you going to try to upsell me? This is a way different experience that people have now, because it's just more natural. I think that my daughter is gonna grow up in a world where she's definitely gonna talk to her bank, and it'll definitely be AI driven, and I'm glad her dad had something to do with that. So seeing all of these, I think one of the common misconceptions is that this is for, like, the next generation. It's like, no, this technology does not care what generation you are. And I have way more smiles from people above 70 years old using this, and that makes me so happy. That is like, I know we're on the right path and I know this is how people will bank. So our goal every Friday is we show a different smile of a different customer using the product. And we say we're aiming for a billion of these. So that's what we're working toward. And that's what makes me happy.
Hailey Windham
Hailey Windham
66:36
I love that. And you answered my next question about the misconceptions. I think that's great. It's: this is what we can do today. This is possible. And it's not just for the younger bankers that are coming up; it's for you today. And I think that's great. And again, I mean, I'm not a young banker, and I found a use case where I would love to have this at my FI. So yeah, love that and appreciate it. And for Lisa, what advice would you give compliance leaders who are beginning to evaluate AI systems?
lisa
Lisa Durnford
67:08
Yeah, I think if I could zero in on one thing, it's: talk to teams outside of compliance about this. When you're thinking about doing a risk assessment of this new product, or whatever it might be, when you're thinking about your enterprise risk program, talk to your product teams, talk to the engineering teams that are building your interfaces, talk to your security teams about authentication and how you're thinking about this. Talk to other folks in the industry. I think we always talk about the risks of managing compliance in silos, having disparate teams that are all looking at different signals and trying to manage these separately, and how that's not the way to go. We have to bring these conversations together, and I think that applies so much to this conversation. So risk teams and compliance teams: go meet other teams at the bank, and talk to them about how they're thinking about this, what they're seeing, and how to build risk systems together to best manage this. That will certainly help build that strongest program, to make sure you're actually enabling this and creating a strong business around this functionality, and not just thinking about it from a risk perspective.
Hailey Windham
Hailey Windham
68:32
Love that. I love that perspective so much. And then last question, and I'll let you both go. If a bank leader is listening to this episode right now, what is the first conversation they should be having internally, do you think, Tyllen?
Tyllen
Tyllen Bicakcic
68:49
What bank is going to do this that's going to start taking my deposits? Because, quite honestly, we view this as very much a go-out-and-acquire-customers type thing. And there has never been a stickier technology product than this, ever. So who's going to do it in the banking area first? And I will also say, once people start having conversations with an AI at your bank, it's very hard for them to leave, because of all those conversations. It's just like when you talk to Claude versus ChatGPT: you probably go to each for different questions based on all the information it has on you beforehand. So it's real stickiness. If I'm the bank, I'm talking to our guys and I'm saying, okay, how long do we have until this actually starts becoming a thing? Because we are launching with banks now, we are launching partnerships with LLBs. And I think there will be a difference between the types of experiences we see with people who go and handle agentic banking versus not. But yeah, that's my two cents on it.
Hailey Windham
Hailey Windham
69:58
Lisa, any last parting thoughts on this conversation?
lisa
Lisa Durnford
70:03
I think I would just encourage, as I always do with compliance teams, to think about it as: how are we going to help our customers use these tools in the best way possible, in the safest way possible? That's the question. And then, what does that mean for you and your teams, depending on where you sit in this process? But as a bank, who are the stakeholders that I need around the table to ensure that we're building the best strategy for enabling this? And I'd just encourage everyone to think about this as an opportunity, and not necessarily a risk that we just need to mitigate. So really diving in, ensuring that you're asking the right questions, and always prioritizing the customer in how you set this up.
Hailey Windham
Hailey Windham
70:56
Couldn't agree more. And, Tyllen, any last thoughts?
Tyllen
Tyllen Bicakcic
71:01
No, I love it. I love this whole space. I mean, there's a reason why I also love working with community and regional banks. There's a reason why I live in a small town. I understand how important the personal connection is. And don't think that AI takes that away. It's a very personal technology, and it is something that improves people's lives dramatically every day. Don't be scared of it. It'll help. People will help get people comfortable with AI; AI won't do that on its own. That's why we're here. At the end, it's all still a people business.
Hailey Windham
Hailey Windham
71:41
For sure, for sure. So if there's one thing I hope you listeners take away from this conversation, it's this: AI is coming into banking whether we're ready or not. The question isn't whether AI will participate in financial services; the question is, will banks design it responsibly? Because fraud is still human driven. Even when AI is used to commit fraud, it's directed by people. And that means AI, designed well, can actually reduce it. It can enforce policy consistently, it can monitor behavior in real time, it can operate within guardrails that humans sometimes miss under pressure. We've all seen that. But only if we build it intentionally. Agentic AI isn't about replacing fraud teams at all. It's not about cutting compliance out of the process. It's about embedding trust directly into the system, so that when money moves, especially in real time, governance moves with it. For community banks, this isn't something to fear. Like Tyllen said, it's an opportunity to leapfrog legacy thinking and design infrastructure that's safer, smarter, and more defensible. Because if we don't design AI agents inside the banking system with the right controls, someone else will design them outside of it. And that's not the future any of us want. So we'll part with: stay vigilant, stay informed, and keep moving fraud forward. Thank you guys so much for joining.
lisa
Lisa Durnford
73:04
Thank you.
Tyllen
Tyllen Bicakcic
73:04
Thank you.
Hailey Windham
Hailey Windham
73:06
And before we hang up, I'm just gonna do a quick little welcome back to part two, just in case we split this up into two because it's so long. And I'll just say, guys, welcome back, and then you can just say whatever real quick, if you don't mind. All right, so welcome back to part two. We are very excited. Obviously, we got really deep into the conversation, but we weren't quite finished. So now we're gonna move on into disputes, and then into what we're really excited about: where agentic banking is today, where it's going, and why you're ready for it, whether you believe it or not. So, Tyllen, Lisa, welcome back to the show.
Tyllen
Tyllen Bicakcic
73:44
Thank you. Nice to be here.
Hailey Windham
Hailey Windham
73:48
Perfect, okay great. Thank you so much guys, I really appreciate it. I'm so sorry for the technical difficulties earlier. I am so embarrassed by that.
Tyllen
Tyllen Bicakcic
73:57
No, no, literally
Host
Hailey Windham
Hailey Windham
Fraud Forward, Sardine

Guests

Tyllen
Tyllen Bicakcic
Co-Founder at Payman
lisa
Lisa Durnford
Head of AML Compliance