A Framework Is Now Available That Allows Agents To Act Autonomously With Each Other Toward Task Completion!
3 main points
✔️ Propose role-playing, a new framework for improving cooperation between agents
✔️ Create two large dialogue datasets to test the performance of role-playing
✔️ Open-source the library for the agent implementations used in this paper
CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society
written by Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem
(Submitted on 31 Mar 2023)
Comments: Published on arxiv.
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computers and Society (cs.CY); Machine Learning (cs.LG); Multiagent Systems (cs.MA)
The images used in this article are from the paper, the introductory slides, or were created based on them.
While rapid advances in chat-based large language models such as ChatGPT have yielded impressive results on a variety of tasks, their task-solving ability depends heavily on the human ability to guide the agent in the right direction: users must express their intentions through appropriate, accurate prompts.
In addition, creating effective prompts often requires deep understanding of and expertise in the field; for example, someone with no trading expertise would find it difficult to write appropriate prompts to instruct agents to develop a trading application.
In this paper, we propose a new agent framework named role-playing to address these problems, demonstrate its effectiveness through experiments on two large datasets, and support future research by open-sourcing the library associated with the implementation. This article describes that paper.
Role-playing, the framework proposed in this paper, is a novel approach for guiding multiple communicative agents toward autonomous task completion.
It is designed to solve a given task through role-play in which two agents, an AI assistant and an AI user, collaborate.
As an example, the figure below shows the entire role-playing process when a user is given the task of developing a trading bot for the stock market.
In this example, at the start of the role-playing session, the AI assistant and AI user are assigned the roles of Python Programmer and Stock Trader, respectively. The AI user then acts as a task planner, issuing instructions, while the AI assistant acts as a task executor, carrying out each planned step and responding back to the AI user.
Thus, role-playing is structured so that the user simply assigns roles to the agents, and the agents automatically guide each other toward a plan for task completion.
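The loop described above can be sketched roughly as follows. The `chat` function is a stand-in for a call to a chat model such as gpt-3.5-turbo, stubbed here so the sketch runs without an API; the `<CAMEL_TASK_DONE>` termination token follows the paper's setup, but the role prompts and function names are our own illustration.

```python
# Minimal sketch of the role-playing loop between an AI user (planner)
# and an AI assistant (executor). `chat` is a placeholder for an LLM call.
def chat(system_prompt: str, history: list[dict]) -> str:
    """Placeholder for an LLM call; a real implementation would send the
    system prompt plus the conversation history to a chat model."""
    turn = len(history)
    if turn >= 4:  # stub: pretend the task finishes after a few turns
        return "<CAMEL_TASK_DONE>"
    return f"Step {turn + 1} toward the task."

def role_play(task: str, assistant_role: str, user_role: str,
              max_turns: int = 10) -> list[dict]:
    assistant_sys = f"You are a {assistant_role}. Help complete the task: {task}"
    user_sys = (f"You are a {user_role}. Instruct the assistant one step at a "
                f"time until the task is done, then say <CAMEL_TASK_DONE>.")
    history: list[dict] = []
    for _ in range(max_turns):
        # The AI user plans and issues the next instruction...
        instruction = chat(user_sys, history)
        history.append({"role": "user", "content": instruction})
        if "<CAMEL_TASK_DONE>" in instruction:
            break
        # ...and the AI assistant executes it and reports back.
        solution = chat(assistant_sys, history)
        history.append({"role": "assistant", "content": solution})
    return history

transcript = role_play("Develop a trading bot for the stock market",
                       assistant_role="Python Programmer",
                       user_role="Stock Trader")
```

Note that after the initial role assignment, no further human input is needed: the two agents alternate until the termination token appears or the turn limit is reached.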
In addition, the authors have open-sourced the library associated with the role-playing implementation, making this a valid approach that anyone can use even in situations where the user lacks the expertise to accomplish the task, something that has been difficult in the past.
Unlike ordinary prompting of a language model, prompt engineering in this method is performed only for the initial task specification and agent role assignment; once the conversation phase begins, the AI assistant and AI user automatically prompt each other until the task is completed.
In this paper, such prompt engineering is called Inception Prompting, and it consists of three prompts: the Task Specifier Prompt, the Assistant System Prompt, and the User System Prompt.
As an example, the figure shows Inception Prompting for AI Society role-playing, a conversational data set discussed below.
The Task Specifier Prompt contains information about the roles of the AI assistant and AI user in the role-playing session, allowing the agent to take ideas as input and generate specific tasks.
The Assistant System Prompt and User System Prompt contain information about the assigned task and roles, termination conditions, and constraints and requirements to avoid undesirable behavior; these play a very important role in achieving autonomous cooperation between agents.
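The three Inception Prompts can be pictured as templates along the following lines; the wording here is an illustrative paraphrase, not the exact prompts from the paper.

```python
# Illustrative templates for the three Inception Prompts.
# The exact wording in the paper differs; these are paraphrases.
TASK_SPECIFIER_PROMPT = (
    "Here is a task that {assistant_role} will help {user_role} to complete: "
    "{task}. Please make it more specific. Reply with the specified task in "
    "{word_limit} words or less. Do not add anything else."
)

ASSISTANT_SYSTEM_PROMPT = (
    "Never forget you are a {assistant_role} and I am a {user_role}. "
    "We share a common interest in completing the task: {task}. "
    "I will instruct you one step at a time; you must write a specific "
    "solution to each instruction and nothing else."
)

USER_SYSTEM_PROMPT = (
    "Never forget you are a {user_role} and I am a {assistant_role}. "
    "You must instruct me, one instruction at a time, to complete the task: "
    "{task}. When the task is done, reply only with <CAMEL_TASK_DONE>."
)

prompt = TASK_SPECIFIER_PROMPT.format(
    assistant_role="Python Programmer", user_role="Stock Trader",
    task="Develop a trading bot for the stock market", word_limit=50)
```

Filling the templates once at the start is the only manual prompt engineering; everything after that is agent-to-agent.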
For validation, we collected two large conversational datasets, named CAMEL AI Society and CAMEL Code, and analyzed them.
We then conducted validation through role-playing simulations of the AI assistant and AI user using two gpt-3.5-turbo agents, and discussed the challenges we found.
CAMEL AI Society
To create the CAMEL AI Society dataset, the following series of steps was implemented:
- To begin, have the agent present a choice of AI Assistant and AI User roles (this is done by giving the agent specific prompts designed to elicit these roles)
- Then instruct the agent to generate a range of possible tasks that could be solved by role-playing the generated AI assistant and AI user roles
- After generating a range of resolvable tasks, use the Task Specifier Prompt passed to the agent to make the tasks more specific
A summary of the prompts for this series of steps is shown in the figure below.
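The three generation steps above can be sketched as follows, with the LLM call stubbed out; the function names and prompt wording are our own illustration, not the CAMEL library's API.

```python
# Sketch of the three data-generation steps: generate roles, generate
# candidate tasks per role pair, then make each task specific.
def llm(prompt: str) -> str:
    """Placeholder: a real version would query a chat model."""
    return "\n".join(f"item {i}" for i in range(1, 4))  # stub: 3 lines

def generate_roles(kind: str, n: int) -> list[str]:
    # Step 1: prompt the model for candidate assistant/user roles.
    return llm(f"List {n} diverse {kind} roles, one per line.").splitlines()

def generate_tasks(assistant_role: str, user_role: str, n: int) -> list[str]:
    # Step 2: prompt for tasks this role pair could solve by role-playing.
    return llm(f"List {n} tasks that a {assistant_role} can help a "
               f"{user_role} with, one per line.").splitlines()

def specify_task(task: str, assistant_role: str, user_role: str) -> str:
    # Step 3: make the task concrete via the Task Specifier Prompt.
    return llm(f"Make this task more specific for a {assistant_role} "
               f"helping a {user_role}: {task}")

assistants = generate_roles("AI assistant", 3)
users = generate_roles("AI user", 3)
dataset = [specify_task(t, a, u)
           for a in assistants for u in users
           for t in generate_tasks(a, u, 3)]
```

Each specified task then seeds one role-playing conversation, which is why the dataset size is the product of the role and task counts.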
These prompts generated 50 AI assistant roles and 50 AI user roles, with 10 tasks for each role pair, for a total of 25,000 conversation sets.
The roles of the generated AI assistant and AI user are shown in the figure below.
To generate CAMEL Code, the following steps were performed using the same approach as for CAMEL AI Society:
- To begin, have the agent present a list of programming languages and domains
- Then instruct the agent to generate a set of possible tasks that could be solved by a programmer who is an expert in a particular programming language, working with people in a particular domain
- Make the generated set of tasks more specific using the Task Specifier Prompt
These prompts generated 20 programming languages, 50 domains, and 50 tasks for each combination of programming language and domain, for a total of 50,000 conversation sets.
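A sketch of how such language-domain-task seed combinations could be enumerated; the lists below are small illustrative stand-ins, not the actual values generated in the paper.

```python
from itertools import product

# Illustrative seed enumeration for CAMEL Code-style generation:
# every (language, domain) pair gets a fixed number of task slots.
languages = ["Python", "Java", "Go"]
domains = ["Finance", "Healthcare"]
tasks_per_pair = 2

seeds = [
    {"language": lang, "domain": dom, "task_id": i}
    for lang, dom in product(languages, domains)
    for i in range(tasks_per_pair)
]
# Each seed would be expanded into a specific task via the
# Task Specifier Prompt and used to start one role-playing session.
print(len(seeds))  # 3 languages x 2 domains x 2 tasks = 12
```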
Challenges and Observations
Using the aforementioned data set, we conducted role-playing simulations and found the four issues shown in the figure below.
The four issues are as follows:
- Role Flipping: the AI assistant and AI user switch roles during the conversation
- Assistant Repeats Instruction: the assistant repeats the user's instruction instead of executing it
- Flake Reply: the AI assistant gives vague replies such as "I will..." without actually carrying out the instruction
- Infinite Conversation: the AI assistant and AI user fall into an endless loop of meaningless conversation
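Simple heuristics could flag such failure modes in a transcript. The rules below are our own illustration of what such checks might look like, not the detection method used in the paper.

```python
# Heuristic detectors for the four failure modes (illustrative only).
def detect_issues(messages: list[dict]) -> set[str]:
    issues = set()
    for i, msg in enumerate(messages):
        text = msg["content"].strip()
        # Role flipping: the assistant starts issuing instructions.
        if msg["role"] == "assistant" and text.startswith("Instruction:"):
            issues.add("role_flipping")
        # Assistant repeats the instruction instead of executing it.
        if (msg["role"] == "assistant" and i > 0
                and text == messages[i - 1]["content"].strip()):
            issues.add("repeats_instruction")
        # Flake reply: a vague promise with no actual solution.
        if msg["role"] == "assistant" and text.startswith("I will"):
            issues.add("flake_reply")
    # Infinite conversation: only an external turn cap stops the loop.
    if len(messages) >= 40:
        issues.add("infinite_conversation")
    return issues

log = [
    {"role": "user", "content": "Instruction: write the data loader."},
    {"role": "assistant", "content": "I will write the data loader."},
]
print(detect_issues(log))  # {'flake_reply'}
```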
The resolution of these issues may provide clues for the development of more effective AI systems, and this verification has provided valuable insights.
How was it? In this article, we have described a paper that proposes a new agent framework named role-playing, demonstrates its effectiveness through experiments on two large datasets, and makes a significant contribution to future research by open-sourcing the libraries associated with these implementations.
While the content of this paper provides valuable insights into the field of large language models and communicative agents, some challenges remain: for example, the tasks generated by role-playing are numerous and diverse, so assessing task-completion ability requires extensive domain knowledge. It will be interesting to see how this line of work progresses in the future.
Those interested in the details of the prompts and datasets presented here can find them in the paper.