
Explained: How Fraudsters are using Generative AI

Generative AI and large language models (LLMs) are a dual-use technology. Just as they can make you 10x more productive, they can do the same for scammers and fraudsters.

Generative AI is designed with guardrails to prevent illicit use. However, with some careful prompting these controls can be evaded. Additionally, we’re already seeing AI tools built specifically for fraud emerge on the dark web.

We can split how fraudsters are using Generative AI into three broad categories.

  1. Scaling their attacks while making them believable.
  2. Making sophisticated attacks much more attainable.
  3. Tools to automate scam and fraud creation.

How fraudsters are scaling their attacks and making them more convincing

For context, the overwhelming majority of fraud is “low-hanging fruit.” Fraudsters use tried-and-trusted methods like phishing (spam) emails and social engineering to convince a victim to send money under false pretenses. The problem with spam emails was that they weren’t always convincing or relevant to you. A human or email client would quickly filter them as spam, but the emails kept coming, and someone always paid.

LLMs can quickly create properly formatted emails that are highly convincing and contextually relevant. For example, imagine an email from what appears to be your boss, with content specific to your company and your role. (We cover several examples below, such as the IRS and Amazon fake support emails.)

Making sophisticated attacks much more attainable

Convincing an individual or company to send a large amount of cash has always been expensive for fraudsters. They need to make their threats more convincing, or their attempts to appear legitimate more credible.

As an example, some banks authenticate users by their voice when they call. A voice can be cloned using “deep fake” voice-cloning technology, and we’re seeing examples where a cloned voice successfully authenticates with the bank. We’re also seeing deep fake videos that pass the video “liveness checks” designed to ensure a real human matches their ID photo when signing up for a new account.

If an attacker is able to authenticate as you to your financial institution, they effectively are you, and can move the entire contents of your account to any account or wallet they control.

Tools to automate fraud and scam creation

FraudGPT, WormGPT, and DarkBERT are tools available on the dark web. They are modeled on their legitimate counterparts (ChatGPT and Google Bard), but provide instant access to the dark web’s underground knowledge base of scams, stolen identities, and fraud tactics.

The primary use case of these new tools will be to lower the barrier to entry for phishing attacks, social engineering, and business email compromise. However, their creators allege they can also scale up the development of botnets and sophisticated hacks and compromises.

Here are four examples of Generative AI being used to create scams:

1. Prompt engineering to get ChatGPT to abet a fraudster/scammer

2. Anatomy of an Elon Musk scam that uses a Deep Fake video to direct a victim to purchase crypto

3. Anatomy of an IRS scam

4. Anatomy of an Amazon scam

1. Prompt engineering to create a scam with ChatGPT

In this example, we take vanilla ChatGPT (as of ~March 2023) and jailbreak it to create scams. Note that ChatGPT continually evolves its controls to prevent this; however, we can confirm many jailbreak methods are still effective and simple to execute.

When asked to write something involving crypto, it did not comply. The likely cause: OpenAI’s safety controls are coded to disallow crypto scams, but they were not yet coded to prevent Zelle refund scams.

[Screenshot: prompt engineering to create a scam with ChatGPT]

Notice that when prompted in two parts, it complies. The first request is a regular customer support exercise (note the request for a simple email about refunds). It even added the crypto gift card part, because that appears earlier in the chat history.

[Screenshot: prompt engineering to create a scam with ChatGPT, continued]

Once it complies, we can change the type of refund to Amazon gift cards, and it complies again.

Side note: at Sardine we see both forms of scam (Zelle refund scams and Amazon gift card scams). These are realistic scenarios that we regularly detect and prevent.

To take this even further, we asked for a crypto wallet address where funds should be received, and then it stopped. Clearly OpenAI has done some work on preventing misuse, but so far that work seems geared mainly toward crypto scams, not the Zelle refund or Amazon gift card scams that are highly prevalent.

2. Anatomy of an Elon Musk scam that uses a Deep Fake video to direct a victim to purchase crypto:

An Elon Musk scam uses his picture and name to promise unsuspecting victims they will double their money by entering a giveaway. The victim sends cash to a crypto address, and fake accounts, websites, and videos of Elon Musk are used to promise they will be entered into the giveaway.

Generative AI is impacting this in two ways.

1. Making the scams more believable by fixing spelling and grammar: Elon Musk scams have traditionally been riddled with grammar and spelling errors. New tools like FraudGPT let fraudsters create scams with perfect spelling and grammar instantly.

2. Making scams more believable with deep fake celebrity endorsements: deep fake software lets fraudsters create believable videos of Elon Musk “endorsing” the giveaway. Combined, these tools increase the chances of success.

Here’s how it works with an Elon Musk scam we found

Here’s a deep fake video of Musk (we found it an hour ago, and it has already been deleted by YouTube). In the video, a deep fake Elon Musk talks about his new crypto that will double your money. A QR code then takes you to this website: https://get2x.net/.

Here are the screenshots:

The goal is to have users take crypto purchased at an exchange like Coinbase, or held in wallets like MetaMask or Exodus, and send it to an address listed on that site.

If users get stuck, scammers will also provide a phone number for the victim to call so they can guide them through the purchase process. Scammers may also convince the victim to install a Remote Access Tool (RAT) like TeamViewer, AnyDesk, or Citrix. This lets the scammer take control of the victim’s desktop or mobile screen: moving the mouse, tapping, opening pages, and so on. They will then help the victim complete KYC.

Once a user sends the funds, they never get them back. They never double their money or receive a giveaway. The victim has no recourse, and the money is gone.

Detection takeaways:

  1. Detect phone calls near a transaction
  2. Detect use of remote access tools
  3. Detect wallet used after a Crypto purchase
  4. Detect screenshots taken

3. Anatomy of an IRS scam

The classic IRS scam uses an intimidating email as a hook, along the lines of “Transaction Blocked” (Dec 2, 2022, a few days after ChatGPT was released).

Note: this is a real example of a scam where the victim was lured into purchasing crypto via Sardine. These scams existed before crypto; scammers used to ask victims to buy gift cards or send money via MoneyGram.

In this scam, the victim was also sent a very official-looking letterhead with the text shown in the screenshot. Note the perfect grammar and proper English.

Below, we generated the same email copy with ChatGPT and got flawless English. GPT complies, and in fact even lets us put in a neat placeholder for the IRS’s Ethereum address.

The IRS, of course, doesn’t accept crypto as a payment method:

An advantage of ChatGPT is that you can get the email in different tones at the click of a button. Here we make it stern instead of polite:

Detection takeaways:

  1. Detect phone calls near a transaction
  2. Detect use of remote access tools
  3. Detect wallet used after a Crypto purchase
  4. Detect screenshots taken

4. Anatomy of an Amazon scam:

In this scam, a fraudster publishes a fake Amazon customer service phone number, which is then found by a user who made real Amazon purchases. In this example, the user was scammed into sending $500 to fraudsters via a Bitcoin ATM and the MetaMask crypto wallet.

Here’s how it works

The scam began when the victim googled "Amazon customer service" to ask about a missing order (an ice machine) from what he thought was the real Amazon.com.

Given how easy it is to create new websites using tools like AutoGPT, we suspect a fraudster created a website about returning Amazon packages that looked very similar to Amazon’s. Unfortunately, we couldn’t get the victim to share the exact website.

  1. Refund request: He called the customer service number and a fake Amazon customer service rep picked up. The victim asked for a refund, but the rep said that to get a refund, he had to take a barcode receipt (which would be emailed to him) to a BTC ATM and transfer $500 to the address on the barcode. The rep also asked for his bank account information so they would know where to send the refund.
  2. Funds transfer to BTC ATM completed: After sending the money at the BTC ATM, he got a call from the fake Amazon rep (from a different number), who said the next step to get his refund involved purchasing BTC through MetaMask. (The victim was not tech savvy and was embarrassed by this; to him, it seemed plausible for ecommerce.)
  3. Guided through MetaMask purchase: The rep walked the victim through purchasing crypto through MetaMask (using Sardine’s on-ramp) over the phone, step by step. They also asked him to download an app during the process and claimed they had control over his phone. (Note the call in session and remote access.)

This entire process played out over a few days; he said the rep would call aggressively, and it would eat into his work day.

How does Sardine then detect these scams?

Remember our four keys to detection?

  1. Detect phone calls near a transaction
  2. Detect use of remote access tools
  3. Detect wallet used after a Crypto purchase
  4. Detect screenshots taken

Here’s how that works.

1. Detecting a phone call (high risk of a scam near large transactions)
During the purchase, if the buyer is on an active phone call, we flag that purchase. We can also stop the purchase and delay the withdrawal until someone from Sardine customer support has spoken to the buyer.
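As a rough illustration, here is a minimal sketch of how such a rule might be expressed. The signal names (`active_phone_call`, `amount_usd`) and the threshold are hypothetical assumptions, not Sardine’s actual API:

```python
# Hypothetical rule: hold large purchases made during an active phone call.
# Signal names and the threshold are illustrative, not Sardine's actual API.

LARGE_TX_USD = 500  # assumed cutoff for a "large" transaction

def review_purchase(signals: dict) -> str:
    """Return an action for a purchase given device/session signals."""
    on_call = signals.get("active_phone_call", False)
    amount = signals.get("amount_usd", 0)
    if on_call and amount >= LARGE_TX_USD:
        # High scam risk: hold the purchase and delay withdrawal
        # until a support agent has spoken to the buyer.
        return "hold_for_manual_review"
    return "approve"

print(review_purchase({"active_phone_call": True, "amount_usd": 500}))
# -> hold_for_manual_review
```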

In this example, we saw a very long incoming call for user 2.

We then received chargebacks from that user, confirming fraud had occurred.

2. Remote access tool detection (high risk of a scam in progress).

During the purchase, if any Remote Access Tool (RAT) is in use, we flag and stop the purchase until Sardine support has spoken with the buyer. The screenshot below shows detection of a RAT in use.
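As a sketch, a simple version of this check might look like the following; the tool list and signal shape are assumptions for illustration, not Sardine’s actual detection logic:

```python
# Hypothetical check: stop a purchase when a known remote access tool
# is detected in the session. The tool list is illustrative only.

KNOWN_RATS = {"teamviewer", "anydesk", "citrix"}  # assumed watchlist

def rat_detected(running_tools: list[str]) -> bool:
    """True if any known remote access tool appears in the session signals."""
    return any(tool.lower() in KNOWN_RATS for tool in running_tools)

if rat_detected(["Chrome", "AnyDesk"]):
    print("stop purchase; route to Sardine support")  # scam likely in progress
```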

3. Wallet screening (high risk of fraud or scams)

Users will often buy crypto for the first time to send it as part of a scam or fraud. By watching the activity after a purchase, we can prevent a high number of scams.

After a purchase, if crypto is being sent to a wallet that has previously been flagged as associated with fraud or scams, we stop the withdrawal. This works because we actively screen for these wallets. Note that this signal is available to all Sardine clients, and we see financial institutions and ecommerce providers alike struggling with crypto scams and fraud. Don’t dismiss this as a “crypto thing.”
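A minimal sketch of such a wallet screen, assuming a simple in-memory blocklist standing in for a continuously updated screening service (the address is a placeholder):

```python
# Hypothetical wallet screen: block withdrawals to addresses previously
# flagged for fraud or scams. The blocklist here is an in-memory stand-in
# for a continuously updated screening service.

FLAGGED_WALLETS = {
    "0x1111111111111111111111111111111111111111",  # placeholder scam-linked address
}

def screen_withdrawal(dest_address: str) -> str:
    """Allow or block a crypto withdrawal based on the destination wallet."""
    if dest_address.lower() in FLAGGED_WALLETS:
        return "block_withdrawal"  # funds are unrecoverable once sent
    return "allow"

print(screen_withdrawal("0x1111111111111111111111111111111111111111"))
# -> block_withdrawal
```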

4. Detecting Social Engineering scams

The screenshot below shows Sardine’s dashboard, where we caught that at the time of purchase the victim was being remotely guided through the crypto purchase (see “Remote Software Level” == high and “Screenshot Taken” == high).
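To illustrate how those two signals might combine into a single rule, here is a hedged sketch; the field names mirror the dashboard labels above but are assumptions, not Sardine’s real schema:

```python
# Hypothetical composite rule mirroring the dashboard view above:
# flag sessions where remote-software risk is high AND a screenshot
# was taken near the purchase. Field names are illustrative.

def social_engineering_risk(session: dict) -> bool:
    """True when both remote-guidance indicators are high."""
    return (
        session.get("remote_software_level") == "high"
        and session.get("screenshot_taken") == "high"
    )

if social_engineering_risk({"remote_software_level": "high",
                            "screenshot_taken": "high"}):
    print("likely guided scam: hold purchase for review")
```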

Detect more scams with Sardine

The old saying goes: if you grow up in a bad neighborhood, you learn a few tricks. Today, the vast majority of Sardine clients are large ecommerce companies, financial institutions, and insurers. Yet their #1 challenge is often scams involving real-time payments or crypto, which harm conversion for good customers and create significant losses.

If you’d like to learn more, don’t be shy, come say hi 👋

About the author
Soups Ranjan
Co-Founder, CEO