How to Protect Customers from ChatGPT-Driven Scams
ChatGPT has the potential to dramatically reduce the cost of creating scams. Scammers can now generate realistic phishing messages, fake customer support conversations, or social engineering attacks quickly and at scale. Scammers just got more efficient, and the CAC (customer acquisition cost) for a scammer to make money has been massively reduced.
The best way to fight AI is with AI guided by the best fraud and compliance professionals and by working together as an industry.
Fighting scams takes teamwork. We can catch more scams before they happen if we understand the types of scams and what to look for.
Scam types made easier by ChatGPT
- Phishing scams: Fraudsters could use ChatGPT to generate realistic-looking messages that trick people into revealing sensitive information, such as passwords or credit card numbers. For example, they could use the model to generate convincing messages that appear to be from legitimate organizations like banks or email providers.
- Fake customer support: ChatGPT could create fake customer support bots that appear to be from legitimate companies. These bots could then ask for personal information or payment details, claiming to be needed for troubleshooting or verification purposes.
- Fake news or information: ChatGPT could generate false news stories, fake product reviews or fake testimonials, which can be used to scam people. By creating convincing narratives, fraudsters can influence people's beliefs and behavior.
- Social engineering attacks: Fraudsters could use ChatGPT to generate personalized messages that appear to be from a trusted source, such as a friend or a family member, to manipulate people into performing an action or divulging sensitive information.
Ways to detect these scams
As fraudsters use increasingly sophisticated methods to perpetrate scams, companies must stay updated with the latest detection and prevention techniques.
Here are some potential methods for detecting scams generated by models like GPT:
- Message content: One approach is to analyze the text of a message or communication for specific fraud indicators. Companies can use natural language processing (NLP) techniques to look for language patterns that suggest fraudulent intent. AI-generated text has a certain “way” of phrasing things; detect that, and you may have caught a scam in progress.
- User Device: Another approach is to analyze the device associated with a particular user. This could involve looking for activity patterns that tend to be fraudulent at account onboarding, during funding, or during a payment.
- Transaction Monitoring: This could include monitoring user behavior for anomalies such as sudden spikes in transaction volumes, changes in transaction types, or deviations from established activity patterns.
- Machine learning: Machine learning algorithms can be trained on large datasets of known fraudulent communications or transactions to identify patterns or signatures associated with fraud.
- Collaboration: Sharing intelligence across companies helps identify persistent bad actors. By analyzing the behavior of similar users or accounts across platforms, companies can stop repeat offenders.
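To make the first method concrete, here is a minimal sketch of scoring a message against fraud-indicator language patterns. The pattern list and scoring function are illustrative assumptions; a production system would use a trained NLP classifier rather than hand-written rules.

```python
import re

# Hypothetical fraud-indicator phrases; a real system would learn these
# from labeled data rather than maintain a hand-written list.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)? (action|response) required",
    r"click (the|this) link",
    r"confirm your (password|card number|ssn)",
    r"your account (will be|has been) (suspended|locked)",
]

def scam_indicator_score(message: str) -> float:
    """Return the fraction of known fraud-indicator patterns found in a message."""
    text = message.lower()
    hits = sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, text))
    return hits / len(SUSPICIOUS_PATTERNS)

msg = "URGENT action required: click this link to verify your account."
print(scam_indicator_score(msg))  # 3 of 5 patterns match -> 0.6
```

A score like this would feed into a broader risk model alongside device and behavior signals, not drive a block decision on its own.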
It's worth noting that fraudsters are constantly evolving their techniques and adapting to new security measures, so companies need to remain vigilant and adapt their fraud detection strategies accordingly. It's also important to be aware of the limitations of any given approach, as fraudsters may be able to find ways to circumvent even the most sophisticated security measures.
The ideal combination: Device, Behavior, ML, Humans and Collaboration.
Sardine combines device and behavior insights with cutting-edge ML models to catch more fraud.
Transaction, Device and Behavior
When you understand a device and how it is used, you can spot the hidden clues scammers leave behind. At Sardine, we are the only provider that can detect when an elderly customer is using remote screen-sharing software (a tool scammers often use to “assist” their victims). One customer reduced their fraud by 7x and now recommends Sardine to all of their partners and banks because of this unique capability.
A scam won’t show up in a transaction until the payment has already been authorized. Device and user behavior signals allow Sardine to spot risk before a transaction happens.
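As a sketch of how device and behavior signals can be combined ahead of authorization, here is a toy pre-transaction risk score. The signal names and weights are assumptions for illustration only; they are not Sardine's actual feature set or API.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Illustrative device/behavior signals; names are assumptions,
    # not Sardine's real schema.
    remote_access_tool_detected: bool   # e.g. screen-sharing software running
    typing_speed_unusual: bool          # hesitant, dictated-style typing
    new_device: bool                    # device never seen for this account
    copy_pasted_account_number: bool    # payee details pasted, not typed

def pre_transaction_risk(s: SessionSignals) -> float:
    """Score a session from 0 to 1 using device and behavior signals,
    before any payment is authorized."""
    weights = {
        "remote_access_tool_detected": 0.5,
        "typing_speed_unusual": 0.15,
        "new_device": 0.15,
        "copy_pasted_account_number": 0.2,
    }
    return sum(w for name, w in weights.items() if getattr(s, name))

risky = SessionSignals(True, True, False, True)
print(pre_transaction_risk(risky))  # high score: review before authorizing
```

The point of the sketch: a scam session can score high on these signals even though the eventual transaction itself would look routine.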
Machine Learning and AI
The best way to counter generalized AI is with specialized AI.
Sardine’s proprietary machine learning models are trained to detect anomalies: the cases where something just looks off. Historically, finding these patterns was left to fraud teams doing manual work, churning through mountains of data. Sardine flags them automatically, saving clients time and effort.
Sardine spots anomalies from a baseline, and can automatically adjust the response we give clients over time. Our models are constantly being retrained and back tested. As scams evolve, Sardine is evolving too.
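A minimal statistical sketch of anomaly detection from a baseline: flag a transaction amount that deviates far from a user's history. This is a z-score illustration of the general idea, not Sardine's proprietary models.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """Flag a value that deviates more than `threshold` standard
    deviations from the user's historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# A user who normally sends $20-$60 suddenly sends $5,000.
history = [25.0, 40.0, 32.0, 51.0, 28.0, 45.0, 60.0, 22.0]
print(is_anomalous(history, 5000.0))  # True
print(is_anomalous(history, 38.0))   # False
```

In practice the baseline would be multi-dimensional (amount, payee, time of day, device) and the threshold tuned per segment, with models retrained as behavior shifts.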
Fraud and compliance professionals often share war stories and lessons learned, but it's much harder to share data.
Sardine will soon launch a utility for data sharing among banks, fintech companies and crypto wallets to catch more fraud.
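One common pattern for this kind of industry data sharing is matching hashed identifiers, so institutions can check a signup against a shared blocklist without exchanging raw personal data. The sketch below is purely illustrative of that pattern and is not a description of Sardine's actual utility.

```python
import hashlib

def fingerprint(identifier: str, salt: str = "shared-industry-salt") -> str:
    """Hash a normalized identifier (email, phone) so institutions can
    match known bad actors without exchanging raw personal data."""
    normalized = identifier.strip().lower()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# Bank A reports a fraudster; Fintech B checks a new signup against the list.
shared_blocklist = {fingerprint("scammer@example.com")}

print(fingerprint("Scammer@Example.com ") in shared_blocklist)  # True
print(fingerprint("legit@example.com") in shared_blocklist)     # False
```

A real consortium would also need governance around false positives, salt rotation, and consumer-protection rules; the hashing alone is only the matching layer.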
We work better together.
If you want to catch more fraud, contact us.