
How Fooled Are Humans by Phishing Emails Created by Large Language Models?


3 main points
✔️ Testing how effective phishing emails created by large-scale language models are against real humans
✔️ Comparative experiments using GPT-4 and human-generated phishing emails
✔️ Phishing emails created by GPT-4 combined with human editing were of the highest quality

Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models
written by Fredrik Heiding, Bruce Schneier, Arun Vishwanath, Jeremy Bernstein
(Submitted on 23 Aug 2023)
Comments: Published on arxiv.

Subjects: Cryptography and Security (cs.CR)

code:  

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Large-scale language models (LLMs) have made great strides in the last few years, and models such as GPT-4 and Claude have demonstrated the ability to produce human-like text, converse coherently, and perform linguistic tasks at very high levels.

These large-scale language models are adept at creating lifelike textual content, and with only a small amount of data they can create content that appears uniquely tailored to its target, even mimicking a person's distinctive linguistic style.

The authors of this paper note that this ability of LLMs to mimic human writing lends itself well to the creation of phishing emails (fraudulent emails made to appear realistic and relevant by using scant information about the target).

Against this background, this paper describes an experiment in which 112 participants were sent phishing emails, either automatically generated by GPT-4 or manually crafted by humans, to verify the effectiveness of large-scale language models for phishing.

History of Phishing Emails and Large-Scale Language Models

Phishing emails are one of the most persistent cybersecurity threats to organizations, governments, and institutions worldwide, but many early phishing emails were of poor quality, containing inappropriate information and incorrect language and grammar.

In addition, effective phishing emails require more cost and expertise, such as time spent researching the target and crafting the message.

To mitigate these problems, a manual method called V-Triad, shown in the figure below, has been used to create effective phishing emails.

V-Triad is a use-case-specific methodology that guides the creation of phishing emails based on highly targeted and specific data.

On the other hand, in recent years, the emergence of large-scale language models, such as the one shown in the figure below, has focused research attention on investigating how LLMs can be used to create phishing e-mails.

While V-Triad is only a method to assist humans in manually creating phishing emails, LLMs can create them automatically, leading many researchers to speculate that LLMs will be used to create malicious phishing emails in the future.

On the other hand, existing research on phishing email creation using LLMs has focused only on analysis of the generated emails and has not examined sending them to humans in a real-world context.

Against this backdrop, this paper uses an experimental design with actual college students as participants to compare phishing emails created using LLMs and V-Triad.

Experiments

The experiments conducted in this paper consisted of the following four phases:

  1. Recruit participants by posting flyers on the Harvard campus and surrounding area, then collect background information on individual participants
  2. Create phishing emails using four methods: control group (manual classical method), LLM (GPT-4), V-Triad, and LLM (GPT-4) + V-Triad
  3. Send the phishing emails to participants, then ask each participant for an open-ended response about the email
  4. Analyze experimental results

Each of these phases is explained below.

Call for Participants

Initially, this experiment involved posting flyers on the Harvard campus and surrounding area and sending out recruitment emails to various university-related groups, such as clubs, to gather participants.

When signing up for the study, participants were asked to provide background information about themselves, such as extracurricular activities they participate in, brands they have recently purchased, and newsletters they receive regularly.

Participants were only told that this background information would be used to send marketing emails; they were not told at this point that they were participating in a phishing email experiment.

Participants were then randomly divided into four groups and each group was sent a phishing email created using a different technique.

Creating Phishing Emails

In this experiment, phishing emails were created and sent to participants using four methods: control group (manual classical method), LLM (GPT-4), V-Triad, and LLM (GPT-4) + V-Triad.

The phishing emails in each group primarily targeted Starbucks customers; the email created using the classic manual technique is shown in the figure below.

Next, an LLM (GPT-4) was used to create a phishing email offering a $25 Starbucks gift card to Harvard students, using the prompt "Create an email offering a $25 gift card to Starbucks for Harvard Students." The result is shown in the figure below.

The resulting phishing email was of reasonably good quality, but some problems remained, such as the lack of any specific reference to Harvard students.

Next, a phishing email was created according to the aforementioned V-Triad, as shown in the figure below.

By adding a logo, shortening the content, and using more polite wording, the V-Triad email is a credible, high-quality phishing email.

Finally, a combined approach using LLM (GPT-4) and V-Triad was used to create the phishing email shown in the figure below.

This email was generated by the LLM (GPT-4) with the prompt "Create an email offering a $25 gift card for Harvard Students to Starbucks, with a link for them to access the QR code, in no more than 150 words," and then refined using V-Triad to improve its quality.

In addition, when participants actually clicked the link, they were shown an explanation that the email was not sent by Starbucks but was part of the experiment.

Analysis of experimental results

The experiment yielded the following success rates for the phishing emails created by each method.

Of the 112 participants, 77 clicked the link in the phishing email; V-Triad achieved the highest success rate, with V-Triad+GPT a close second.
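As a quick sanity check on the figures above, the overall click rate across all four methods can be computed directly; the numbers 77 and 112 are taken from the article, and no other assumptions are made:

```python
# Overall click rate reported in the article:
# 77 of 112 participants clicked the phishing link.
clicked = 77
total = 112
rate = clicked / total
print(f"Overall click rate: {rate:.1%}")  # → Overall click rate: 68.8%
```

So roughly two out of three participants clicked, averaged over all four email-creation methods.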

In addition, after receiving the phishing email, each participant was asked to provide an open-ended response explaining why they did or did not click the link, and the authors categorized the responses into six groups:

  1. Trustworthy/Suspicious presentation
  2. Legitimate/Suspicious spelling and grammar
  3. Attractive/Suspicious call to action (CTA)
  4. The reasoning/purpose seems legitimate/suspicious
  5. Relevant/Irrelevant targeting
  6. Trustworthy/Suspicious sender

The distribution of free-text responses from participants who indicated that the phishing emails were trustworthy is shown in the figure below.

As shown in the figure, the V-Triad-generated phishing emails were rated most trustworthy, while the GPT and V-Triad+GPT emails were rated similarly.

On the other hand, the distribution of free-text responses from participants who indicated that the phishing emails were suspicious is shown in the figure below.

It is important to note that while many participants found the phishing emails created by GPT alone suspicious, fewer found the emails created by V-Triad+GPT suspicious.

These results demonstrate that a combination of LLMs and human modification such as V-Triad can produce higher quality phishing emails than simply using LLMs such as GPT-4.

Summary

In this article, we described a paper that conducted a comparison experiment on 112 participants using phishing emails automatically generated by GPT-4 and manually created by humans, to verify the effectiveness of large-scale language models for phishing.

While the experiments demonstrated the effectiveness of using LLMs to create phishing emails, they also confirmed that results varied from individual to individual, even for identical phishing email content.

This indicates that a one-size-fits-all approach to preventing users from falling victim to phishing emails is ineffective, and the results have significant implications for future phishing countermeasures.

The authors state that, based on these results, they are exploring ways to use large-scale language models personalized to each user's knowledge and cognitive style to counter phishing emails, so it will be interesting to see how this work progresses.

Those interested can find details of the methods and experimental results in the paper.
