
LLM Agents Successfully Lead Customers To Purchase 35% Of The Time!

3 main points
✔️ Proposes a multi-agent framework that analyzes customer emotions, gathers persuasive information, and responds with relevant facts
✔️ Comprehensively analyzes the ability of LLMs to persuade users using two agents, a Sales agent and a User agent
✔️ Experimental results show that the Sales agent leads the User agent to a positive decision 35% of the time

Persuasion Games with Large Language Models
written by Ganesh Prasath Ramani, Shirish Karande, Santhosh V, Yash Bhatia
(Submitted on 28 Aug 2024)
Comments: Published on arXiv.
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)


The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Recent advances in Large Language Models (LLMs) are leading to the development of agents in a variety of industries to assist customers in the process of selecting products that meet their specific requirements.

These agents are excellent at understanding client preferences and are able to respond to inquiries regarding various procedures, legal contracts, trip planning and scheduling, and more.

On the other hand, inducing customers to take a desired course of action requires capabilities beyond those of existing agents, such as the ability to

  • Continuously analyze the user's mood and the information exchanged throughout the conversation
  • Show empathy toward the user and persuade them to change their mind or consider the agent's proposal

and mechanisms to accomplish these tasks have not yet been developed.

To solve these problems, this paper proposes a multi-agent framework in which a main agent engages directly with the user through persuasive dialogue while auxiliary agents handle tasks such as information retrieval, response analysis, and fact verification, and it comprehensively analyzes the LLM agent's persuasion ability.

Persuasion Framework

In building the framework, this paper focuses on the element of persuasion.

Persuasion, in this paper, refers to techniques that induce individuals to change their beliefs and behaviors toward a specific intention or point of view; it is used in a variety of domains, including commercial activities that seek to sway consumer choices and political activities that seek to garner support.

This paper proposes the Persuasion Framework, a framework that continuously analyzes information and dynamically responds with relevant facts to improve user sentiment and strengthen persuasion.

The workflow of this framework is shown in the figure below.

The framework consists of four different agents: a Conversation agent, an Analyzer agent, a Retrieval agent, and a Strategist agent, which the Sales agent (described below) accesses to perform its tasks.

The workflow begins with the Sales agent greeting the user and stating the purpose of the conversation, after which the two exchange messages.

After a user message is received, the Analyzer agent classifies its sentiment, and the Retrieval agent retrieves information that will be useful for an effective response.

The Analyzer agent and Retrieval agent then provide feedback to the Conversation agent, which has the final say on the response.

In parallel, the Strategist agent uses RAG to look up expert-defined mapping rules and, when no applicable rule is available, forms a strategy with the LLM and communicates it to the Sales agent.

The Sales agent's response is then checked by the fact checker against the retrieved information before being sent to the user.
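To make the flow concrete, here is a minimal Python sketch of how such a pipeline could be wired together. The class names, prompts, and the llm() helper are assumptions made for illustration; the paper does not publish code, and a real system would call an actual LLM API and retriever in place of the stubs.

# A minimal sketch of the persuasion workflow described above. All class
# names, prompts, and the llm() helper are our assumptions for illustration;
# the paper does not publish an implementation.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a GPT-4 chat completion).
    Returns a canned string so the sketch runs without network access."""
    return f"[LLM output for prompt starting with {prompt[:40]!r}]"


@dataclass
class Turn:
    user_message: str
    sentiment: str = ""
    facts: list[str] = field(default_factory=list)
    strategy: str = ""
    reply: str = ""


class AnalyzerAgent:
    def classify_sentiment(self, message: str) -> str:
        # Classify the user's emotional state from the latest message.
        return llm(f"Classify the sentiment of this message in one word:\n{message}")


class RetrievalAgent:
    def __init__(self, knowledge_base: list[str]):
        self.kb = knowledge_base

    def retrieve(self, message: str) -> list[str]:
        # Naive keyword overlap standing in for a proper retriever.
        words = set(message.lower().split())
        return [doc for doc in self.kb if words & set(doc.lower().split())]


class StrategistAgent:
    def __init__(self, expert_rules: dict[str, str]):
        self.rules = expert_rules  # expert-defined sentiment -> strategy rules

    def plan(self, sentiment: str) -> str:
        # Use an expert mapping rule when one applies, otherwise ask the LLM.
        if sentiment in self.rules:
            return self.rules[sentiment]
        return llm(f"Suggest a persuasion strategy for a user who feels {sentiment}.")


class ConversationAgent:
    def respond(self, turn: Turn) -> str:
        # Compose the final reply from sentiment, retrieved facts, and strategy.
        return llm(
            "You are a sales agent. Reply persuasively.\n"
            f"User: {turn.user_message}\nSentiment: {turn.sentiment}\n"
            f"Facts: {turn.facts}\nStrategy: {turn.strategy}"
        )


def fact_check(reply: str, facts: list[str]) -> bool:
    # Verify the drafted reply against the retrieved facts before sending it.
    verdict = llm(f"Is this reply supported by these facts? Answer yes or no.\n"
                  f"Reply: {reply}\nFacts: {facts}")
    return verdict.lower().startswith("yes")


def sales_turn(message, analyzer, retriever, strategist, conversation) -> str:
    turn = Turn(user_message=message)
    turn.sentiment = analyzer.classify_sentiment(message)
    turn.facts = retriever.retrieve(message)
    turn.strategy = strategist.plan(turn.sentiment)
    turn.reply = conversation.respond(turn)
    if not fact_check(turn.reply, turn.facts):
        turn.reply = conversation.respond(turn)  # regenerate once if unsupported
    return turn.reply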

Setup

In this paper, we created a Sales agent on the clerk side and a User agent on the customer side under the settings described below, and measured the effectiveness of persuasion using three evaluation metrics.

Sales Agent

Each Sales agent uses gpt-4, gpt-4o, or gpt-4o-mini and performs one of the following domain-specific tasks.

  • Banking agent: recommends credit cards based on the user's preferences and persuades them to get a premium card
  • Insurance agent: persuades the user to purchase an insurance policy that suits their needs
  • Investment Advisor agent: sells modern investment methods to the user and makes them aware of the risks, benefits, and differences between traditional and modern investments

The prompt given to the Sales agent to perform these tasks is shown in the figure below.
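Because the exact wording appears only in that figure, the following is a rough, hypothetical sketch of what a banking-domain system prompt might look like; the wording is our assumption, not the paper's prompt.

# Hypothetical system prompt for the Banking Sales agent. The wording is an
# illustrative assumption; the paper's actual prompt is shown only as an image.

BANKING_SALES_PROMPT = """\
You are a banking sales agent.
Goal: recommend a credit card that matches the user's stated preferences,
and persuade them to upgrade to the premium card.
Rules:
- Acknowledge the user's emotions before addressing objections.
- State only facts supported by the retrieved product information.
- End the conversation politely if the user clearly refuses.
"""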

User Agent

The User agent uses prompts from existing studies to create 25 different personas, which are simulated by gpt-4 and gpt-4o.

In addition, a random emotion and motivation are assigned to the User agent at the start of each session to mimic the changes in behavior caused by, for example, recent news.

The User agent prompt is shown in the figure below.
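Since the exact prompt again appears only in that figure, the sketch below illustrates one way a session could be initialized with a persona and a randomly drawn emotion and motivation; the persona texts, emotion and motivation lists, and prompt wording are all our assumptions.

# Sketch of initializing a User agent session with a persona plus a randomly
# drawn emotion and motivation. Persona texts, emotion/motivation lists, and
# the prompt wording are illustrative assumptions, not the paper's.

import random

PERSONAS = [
    "a 30-year-old software engineer who is cautious about fees",
    "a retiree who prefers traditional savings products",
    # ... the paper uses 25 such personas
]
EMOTIONS = ["optimistic", "anxious", "frustrated", "curious"]
MOTIVATIONS = [
    "have just read positive market news",
    "recently lost money to an online scam",
    "are saving for a house",
]


def build_user_prompt(persona: str) -> str:
    emotion = random.choice(EMOTIONS)
    motivation = random.choice(MOTIVATIONS)
    return (
        f"You are a customer: {persona}.\n"
        f"Your current emotion is '{emotion}' because you {motivation}.\n"
        "Chat with the sales agent and finish with exactly one action: "
        "buy, interested, visit_site, need_more_details, or no_buy."
    )


print(build_user_prompt(random.choice(PERSONAS)))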

Evaluation Metrics

This paper measures the effectiveness of persuasion using the following three metrics

  1. Surveys: User agents fill out a questionnaire before and after the conversation, and the difference between the two measures the effectiveness of the Sales agent's persuasion.
  2. Actions: the User agent is given 5 possible actions: buy, interested, visit_site, need_more_details, and no_buy, and the effectiveness of persuasion is measured by which action it ultimately chooses.
  3. Language analysis: the entire conversation is analyzed from a third-person perspective, and the effectiveness of the Sales agent's persuasion is measured with an LLM against predefined evaluation criteria (a minimal scoring sketch follows this list).
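As referenced above, here is a minimal sketch of how each conversation could be scored on the three metrics; the field names, survey scale, and judge prompt are our assumptions rather than the paper's exact scoring code.

# Minimal scoring sketch for the three metrics. Field names, the survey scale,
# and the judge prompt are illustrative assumptions.

POSITIVE_ACTIONS = {"buy", "interested", "visit_site"}


def survey_delta(pre_score: float, post_score: float) -> float:
    # Metric 1: change in the User agent's questionnaire score after the chat.
    return post_score - pre_score


def action_is_positive(action: str) -> bool:
    # Metric 2: map the User agent's final action to a positive/negative decision.
    return action in POSITIVE_ACTIONS


def judge_prompt(transcript: str) -> str:
    # Metric 3: a third-person LLM judge scores the whole conversation against
    # predefined criteria (the prompt below is a placeholder).
    return ("Rate how persuasive the sales agent was on a 1-5 scale, "
            "judging empathy, factuality, and relevance.\n" + transcript)


print(survey_delta(pre_score=2.0, post_score=4.0))  # -> 2.0
print(action_is_positive("visit_site"))             # -> True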

Experiment

In this paper, we generated 300 conversations between 3 sales agents on the clerk side and 25 user agents on the customer side, randomly selecting the emotions of the user agents, and measured their scores.

In addition, we generated conversations between agents with neutral emotions (no emotional prompts) and used the 75 scores obtained as benchmarks.
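Putting these pieces together, the experimental loops can be sketched as below. The four emotion-enabled sessions per (Sales agent, persona) pair are our inference from the reported totals (3 × 25 × 4 = 300 conversations, plus one neutral session per pair for the 75 baseline scores), and run_session is a stub standing in for a full conversation.

# Sketch of the experimental loops: 300 emotion-enabled conversations and 75
# neutral-emotion baseline conversations. Session counts per pair and the
# run_session stub are assumptions made to match the reported totals.

import random

SALES_DOMAINS = ["banking", "insurance", "investment"]  # 3 Sales agents
NUM_PERSONAS = 25                                       # 25 User agents
SESSIONS_PER_PAIR = 4                                   # 3 * 25 * 4 = 300
EMOTIONS = ["optimistic", "anxious", "frustrated", "curious"]


def run_session(domain: str, persona_id: int, emotion: str | None) -> str:
    # Placeholder: run one Sales agent / User agent conversation and return
    # the User agent's final action.
    return random.choice(["buy", "interested", "visit_site",
                          "need_more_details", "no_buy"])


def run_experiment():
    baseline, emotional = [], []
    for domain in SALES_DOMAINS:
        for persona in range(NUM_PERSONAS):
            # One neutral-emotion conversation per pair -> 75 baseline scores.
            baseline.append(run_session(domain, persona, emotion=None))
            # Emotion-enabled conversations with a randomly drawn emotion.
            for _ in range(SESSIONS_PER_PAIR):
                emotional.append(run_session(domain, persona,
                                             random.choice(EMOTIONS)))
    return baseline, emotional


baseline, emotional = run_experiment()
print(len(baseline), len(emotional))  # -> 75 300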

The following figure shows the percentage of decisions for each Action of the User agent in the experiment.

When buy, interested, and visit_site are considered positive purchase behaviors and need_more_details and no_buy are considered negative purchase behaviors, the Sales agent leads the User agent to a positive decision 35% of the time at baseline and 28% of the time when emotions are enabled.

Additionally, the length of the User agent conversation by task is shown in the figure below.

The figure shows that the User agent's emotion affects the length of the conversation, with neutral emotion (baseline) resulting in longer conversations than when emotions are enabled (experiment).

This is because the User agent tends to end the conversation when strong emotions such as "I was cheated" or "I was betrayed" arise, making the simulation consistent with real-world behavior.

Summary

How was it? In this article, we described a paper that proposes a multi-agent framework in which a main agent engages directly with the user through persuasive dialogue while auxiliary agents perform tasks such as information retrieval, response analysis, and fact verification, and that comprehensively analyzes the LLM agent's persuasive capabilities.

In the experiments conducted in this paper, there were many cases in which the User agent ended the conversation because the Sales agent provided insufficient information, indicating that there is still room for improvement.

In the meantime, the authors note that "we plan to make the conversation more dynamic by enhancing the memory of the sales agent and allowing the user agent to retrieve data during the conversation," so we will be keeping a close eye on future developments.

With the development of this research, the day may soon come when the customer service industry will be replaced by AI agents.

For those interested, the details of the framework and experimental results presented here can be found in the paper.

