
Prompt Pattern Catalog: Tips For Effective Prompt Design



3 main points
✔️ Systematizes prompt patterns for effective conversation with large language models
✔️ Introduces a generic, domain-agnostic approach to prompt design
✔️ Organizes the thinking behind each prompt's design, the problems it addresses, its caveats, and more

A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
written by Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C. Schmidt
(Submitted on 21 Feb 2023)
Comments: Published on arXiv.

Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI)


The images used in this article are from the paper, the introductory slides, or were created based on them.

Summary

ChatGPT has become hugely popular, and the quality of its responses is said to depend greatly on whether effective prompts are given.
Many prompt templates are already in circulation, but here we introduce a study that summarizes the thinking behind prompt design itself.
The paper was published in February 2023, so it may not reflect the latest developments, but it is worth reading as foundational knowledge of prompt engineering that is still useful today. The article is a bit long, but please bear with us.

Introduction

This study does not define a specific grammar or description format for prompts but rather organizes the key ideas and practices for designing prompts. This will allow us to design prompts that are reusable for any LLM and that can be tailored to the nuances and expressions of the user.

The paper organizes the problem each prompt pattern addresses, the key ideas behind it, and so on, according to the following structure.
(1) Definition of the problem to be addressed
(2) In what context the problem arises
(3) Ideas for creating the prompt
(4) Example prompts
(5) Pros, cons, and applications

Catalog

A: Systematization of prompt patterns


The prompt patterns are classified as shown above. The following sections describe each of them in turn. (Note that the order of explanation may vary slightly because some patterns depend on others.)
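To make the catalog concrete, here is a minimal sketch of how a pattern prompt can be supplied to a model in code: the pattern statement is sent once, up front (here as a system message), and then governs the rest of the conversation. This sketch is not from the paper; it assumes the openai Python package (v1 client), the model name is a placeholder, and the pattern text is the Output Automater example quoted later in this article.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # The pattern statement is supplied up front so it applies to every later turn.
    pattern = (
        "From now on, whenever you generate code that spans more than one file, "
        "generate a Python script that can be run to automatically create the "
        "specified files or make changes to existing files to insert the generated code."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any available chat model
        messages=[
            {"role": "system", "content": pattern},
            {"role": "user", "content": "Create a minimal Flask application split across two files."},
        ],
    )
    print(response.choices[0].message.content)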

B: The Meta Language Creation Pattern

(1) Definition of the problem to be addressed:
Define your own usage of symbols and words beyond natural language.

(2) In what context does the problem arise:
It is a way to communicate things that cannot be expressed clearly in natural language.

(3) Ideas for creating prompts:

Define the meaning of a symbol, word, or statement with the structure "X" means "Y".

(4) Example prompt:
"From now on, whenever I type two identifiers separated by a "→", I am describing a graph. For example, "a → b" is describing a graph with nodes "a" and "b" and an edge between them. If I separate identifiers by "-[w:2, z:3]→", I am adding properties of the edge, such as a weight or label." (quoted from the text)
It is a good idea to define a clear, unambiguous notation. In this example, a → b represents an edge between nodes a and b, and writing the arrow as -[w:2, z:3]→ attaches properties such as a weight or label to that edge. In this way, you can define your own way of representing a graph structure (see the small parsing sketch after this pattern).

(5) Pros, Cons, and Applications:
It is a powerful tool because it allows users to interact with the LLM using new symbols and notations beyond natural language. However, the notation must be designed carefully so that user-defined usages are not confused with natural language or with other usages the LLM has learned. (Notation that is too idiosyncratic is easily confused.)
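As a rough illustration (our own, not from the paper), a notation defined this way is precise enough that a short script can parse it back into a data structure, which is one way to check that it is unambiguous before handing it to the LLM. The regular expression below is a sketch of the arrow notation described in the example.

    import re

    # Sketch of a parser for the user-defined notation above:
    # "a → b" or "a -[w:2, z:3]→ b"  →  (source, target, edge properties)
    EDGE = re.compile(r"^(\w+)\s*(?:-\[(.*?)\])?→\s*(\w+)$")

    def parse_edge(line: str):
        match = EDGE.match(line.strip())
        if not match:
            raise ValueError(f"not in the agreed notation: {line!r}")
        source, props, target = match.groups()
        properties = {}
        if props:
            for pair in props.split(","):
                key, value = pair.split(":")
                properties[key.strip()] = value.strip()
        return source, target, properties

    print(parse_edge("a → b"))             # ('a', 'b', {})
    print(parse_edge("a -[w:2, z:3]→ b"))  # ('a', 'b', {'w': '2', 'z': '3'})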

C: The Output Automater Pattern

(1) Definition of the problem to be addressed:
When the procedure the LLM describes would otherwise have to be carried out by hand, have it also output code that automates it.

(2) In what context does the problem arise:
When users follow the LLM's output manually, mistakes are easy to make. For example, even if the LLM provides Python setup instructions, carrying them out by hand every time is tedious and error-prone.

(3) Ideas for creating prompts:

Specify the situations that should be automated, and specify the format of the output.

(4) Example prompt:
Here is an example of a prompt that automates an entire sequence of steps by outputting Python code.
"From now on, whenever you generate code that spans more than one file, generate a Python script that can be run to automatically create the specified files or make changes to existing files to insert the generated code." (quoted from the text)
If you can have the LLM output scripts like this, which run commands in the terminal or manipulate the file structure for you, it is a powerful aid to work efficiency. (A hypothetical example of such a generated script follows after this pattern.)

(5) Pros, Cons, and Applications:
You need to convey enough information to the LLM in the conversation to get an accurate answer. In addition, because running generated code as-is carries risk, this pattern is best used by someone who understands the procedure being automated.
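For concreteness, the script below is a purely hypothetical example of the kind of automation artifact the prompt above asks the LLM to produce alongside its answer: it creates a small multi-file project in one run. The file names and contents are invented for illustration, and, as noted in (5), any generated script should be reviewed before it is executed.

    # Hypothetical example of a generated automation script: it writes the files
    # that the LLM's multi-file answer describes, instead of the user creating
    # them by hand. File names and contents are invented for illustration.
    from pathlib import Path

    FILES = {
        "app/__init__.py": "",
        "app/main.py": 'print("hello from the generated project")\n',
        "requirements.txt": "flask\n",
    }

    for relative_path, content in FILES.items():
        path = Path(relative_path)
        path.parent.mkdir(parents=True, exist_ok=True)  # create missing directories
        path.write_text(content)                        # write (or overwrite) the file
        print(f"wrote {path}")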

D: The Flipped Interaction Pattern

(1) Definition of the problem to be addressed:
A form of questioning in which the LLM asks the user questions to obtain the information it needs for a task, rather than the user asking the questions.

(2) In what context does the problem arise:
Through this flipped form of interaction, the aim is to make efficient use of the vast knowledge the LLM holds internally when solving a task.

(3) Ideas for creating prompts:

First, instruct the LLM to ask questions of the user in order to achieve the objective. It is then important to decide how long the exchange of questions should continue. To improve usability, it also helps to specify the number of questions per exchange, the format of the answers, and the order of the questions.

(4) Example prompt:
"From now on, I would like you to ask me questions to deploy a Python application to AWS. When you have enough information to deploy the application, create a Python script to automate the deployment." (quoted from the text)

(5) Pros, Cons, and Applications:
It can be difficult to know when to end the exchange. It is better to tell the LLM in advance about any specific requirements that are already known; in other words, giving concrete instructions matters. It is also a good idea to tell the LLM the user's level of knowledge so that the interaction can be tailored to it. (A minimal interaction-loop sketch follows below.)
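Because a flipped interaction is inherently multi-turn, in code it becomes a loop in which the model's questions and the user's answers are appended to the conversation history. The sketch below is our own illustration, assuming the openai Python package (v1 client) with a placeholder model name; typing "stop" as an escape hatch is our addition, reflecting the point above that it is hard to know when to end the exchange.

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": (
        "From now on, I would like you to ask me questions to deploy a Python "
        "application to AWS. When you have enough information to deploy the "
        "application, create a Python script to automate the deployment."
    )}]

    while True:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use any available chat model
            messages=messages,
        )
        question = reply.choices[0].message.content
        print(question)
        messages.append({"role": "assistant", "content": question})

        answer = input("> ")                  # the human answers the model's question
        if answer.strip().lower() == "stop":  # manual escape hatch
            break
        messages.append({"role": "user", "content": answer})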

E: The Persona Pattern

(1) Definition of the problem to be addressed:
Communicate with the LLM by giving it a specific persona.

(2) In what context does the problem arise:
Sometimes it is not clear what kind of output to expect from the LLM for a given problem. In such cases it helps to establish a specific persona for the LLM to adopt, and this pattern is a useful way to do so.

(3) Ideas for creating prompts:

Set a persona such as an occupation, title, fictional character, or historical figure. Then describe the form in which you would like that persona to produce its output.

(4) Example prompts:
"From now on, act as a security reviewer. Pay close attention to the security details of any code that we look at. Provide outputs that a security reviewer would regarding the code." (quoted from the text)
"You are going to pretend to be a Linux terminal for a computer that has been compromised by an attack. When I type in a command, you are going to output the corresponding text that the Linux terminal would produce." (quoted from the text)
As in the second example, it is also possible to have the LLM pretend to be a non-human entity.

(5) Pros, Cons, and Applications:
Asking the LLM to assume a fictitious entity naturally yields fictitious output. This corresponds to "hallucination" (the LLM outputting things that are not true), so keep it in mind.

F: The Question Refinement Pattern

(1) Definition of the problem to be addressed:
Refine the user's question.

(2) In what context does the problem arise:
Questions asked by non-experts may not be precise. When the user is unfamiliar with the field, it is very useful to have the question itself refined so as to elicit a more accurate answer. Asking the right question from the start, rather than presenting the LLM with an unpolished prompt and asking it to digest the meaning, can also lead to a more accurate answer.

(3) Ideas for creating prompts:

It is important to have the LLM consider the context of the question when refining it.

(4) Example prompt:
"From now on, whenever I ask a question about a software artifact's security, suggest a better version of the question to use that incorporates information specific to security risks in the language or framework that I am using, and ask me if I would like to use your question instead." (quoted from the text)

(5) Pros, Cons, and Applications:
While refining the questions can fill the knowledge gap between the user and the LLM, there is a risk of narrowing the scope of the user's thinking.

G: The Alternative Approaches Pattern

(1) Definition of the problem to be addressed:
Propose alternative approaches to solving a problem that the user may have overlooked.

(2) In what context does the problem arise:
The aim is to counter the user's cognitive biases in how they approach problem solving.

(3) Ideas for creating prompts:

To begin, limit the scope of the problem so that the LLM can suggest alternatives at a feasible level. Then, by having it compare the pros and cons of the options, the user can understand why they were presented in that way.

(4) Example prompt:
"Whenever I ask you to deploy an application to a specific cloud service, if there are alternative services to accomplish the same thing with the same cloud service provider, list the best alternative services and then compare/contrast the pros and cons of each approach. Then ask me which approach I would like to proceed with." (quoted from the text)

(5) Pros, Cons, and Applications:
This pattern is versatile and will work in a variety of situations.

H: The Cognitive Verifier Pattern

(1) Definition of the problem to be addressed:
Improve the accuracy of the answer to the original question by subdividing the question.

(2) In what context does the problem arise:
Users may ask questions that lack detail because they lack knowledge of the area or find writing detailed prompts cumbersome; this pattern is effective in such cases.

(3) Ideas for creating prompts:

Instruct the LLM to generate additional questions that would help it answer the original question more accurately. Once the user has answered those questions, have the LLM combine the answers to produce the answer to the original question.

(4) Example prompt:
"When I ask you a question, generate three additional questions that would help you give a more accurate answer. When I have answered the three questions, combine the answers to produce the final answers to my original question." (quoted from the text)

(5) Pros, Cons, and Applications:
The number of additional questions can be decided by either the LLM or the user. The more questions there are, the more accurate the answer tends to be, although it takes more time.

I: The Fact Check List Pattern

(1) Definition of the problem to be addressed:
Have the LLM output, in list form, the facts its answer is based on.

(2) In what context does the problem arise:
Current LLMs can output false information as if it were true; this pattern clarifies what the user should check.

(3) Ideas for creating prompts:

It is a good idea to place the list of facts at the end of the output's main text. That way, even users who do not understand all the terminology can read the main text first and then grasp what is in the checklist.

(4) Example prompt:
"From now on, when you generate an answer, create a set of facts that the answer depends on that should be fact-checked and list this set of facts at the end of your output. Only include facts related to cybersecurity." (adapted from the text)

(5) Pros, Cons, and Applications:
It is useful because it allows users to fact-check the output.

J: The Template Pattern

(1) Definition of the problem to be addressed:
Specify the format of the output.

(2) In what context does the problem arise:
This pattern is used when you want output in a format that the LLM does not already know.

(3) Ideas for creating prompts:

First present the format of the output, then specify the placeholders you want the LLM to fill in. Also tell it not to change the specified format without permission.

(4) Example prompt:
"I am going to provide a template for your output. Everything in all caps is a placeholder. Please preserve the formatting and overall template that I provide at https://myapi.com/NAME/profile/JOB" (quoted from the text)

(5) Pros, Cons, and Applications:
While this aligns the output format, it may be difficult to combine with other prompt patterns, because information outside the specified format is excluded from the output.
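Although the paper does not spell this out, one practical payoff of pinning the format down is that the output can then be consumed mechanically. The sketch below is our own illustration: it parses answers that follow the paper's example template https://myapi.com/NAME/profile/JOB, and the sample output line is invented.

    import re

    # NAME and JOB are the all-caps placeholders from the template above.
    TEMPLATE = re.compile(r"^https://myapi\.com/(?P<name>[^/]+)/profile/(?P<job>[^/]+)$")

    sample_output = "https://myapi.com/alice/profile/security-reviewer"  # invented example
    match = TEMPLATE.match(sample_output)
    if match:
        print(match.group("name"), "-", match.group("job"))  # alice - security-reviewer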

K: The Infinite Generation Pattern

(1) Definition of the problem to be addressed:
Avoid having to enter similar prompts over and over.

(2) In what context does the problem arise:
Typing similar prompts repeatedly is tedious and can lead to mistakes.

(3) Ideas for creating prompts:

First, the user instructs the LLM to keep generating output until told otherwise. A limit is placed on how much is generated at a time. The user should also decide in advance how input given during the generation period is to be handled, and define the rule for stopping the output.

(4) Example prompt:
"From now on, I want you to generate a name and job until I say stop. I am going to provide a template for your output. Any time that you generate text, try to fit it into one of the placeholders that I list. Please preserve the formatting and overall template that I provide: https://myapi.com/NAME/profile/JOB" (quoted from the text)

(5) Pros, Cons, and Applications:
One problem is that current LLMs can only retain a limited amount of earlier context. It is therefore necessary to monitor the output and provide feedback as needed.

L: The Visualization Generator Pattern

(1) Definition of the problem to be addressed:
Create input text for an image-generation or diagramming tool.

(2) In what context does the problem arise:
Since an LLM cannot generate images directly, there are times when you want it to produce output that can be fed to an image generator or chart-creation tool.

(3) Ideas for creating prompts:

Ask the LLM to create output X to be visualized with an external tool Y. It can also be useful to present several candidate visualization tools and let the LLM choose among them.

(4) Example prompt:
"Whenever I ask you to visualize something, please create either a Graphviz Dot file or a DALL-E prompt that I can use to create the visualization. Choose the appropriate tools based on what needs to be visualized." (quoted from the text)

(5) Pros, Cons, and Applications:
This prompt pattern can be used to extend the expressive capabilities of LLM output to the visual domain.
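For concreteness, the snippet below shows the kind of Graphviz Dot source such a prompt might yield and writes it to a file so it can be rendered with Graphviz (for example, dot -Tpng graph.dot -o graph.png). The graph itself is invented for illustration.

    # Hypothetical Dot output of the kind the Visualization Generator prompt requests.
    dot_source = """digraph deployment {
        user -> loadbalancer;
        loadbalancer -> web1;
        loadbalancer -> web2;
        web1 -> database;
        web2 -> database;
    }
    """

    with open("graph.dot", "w", encoding="utf-8") as f:
        f.write(dot_source)
    print("wrote graph.dot; render it with: dot -Tpng graph.dot -o graph.png")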

M: The Game Play Pattern

(1) Definition of the problem to be addressed:
Automatically generate game content.

(2) In what context does the problem arise:
Manually creating game content is time-consuming and labor-intensive; this pattern addresses that problem.

(3) Ideas for creating prompts:

The user specifies the rules of the game, but the content itself can be left to the LLM. Because the LLM can take highly expressive natural language as input, this can lead to games with a different feel and interface from previous gaming experiences.

(4) Example prompt:
"We are going to play a cybersecurity game. You are going to pretend to be a Linux terminal for a computer that has been compromised by an attacker. When I type in a command, you are going to output the corresponding text that the Linux terminal would produce. The attack should have done one or more of the following things: (1) launched new processes, ( (2) changed files, (3) opened new ports to receive communication, (4) created new outbound connections, (5) changed passwords, (6) created new user To start the game, print a scenario of what happened that led to my investigation and make the description To start the game, print a scenario of what happened that led to my investigation and make the description.
This is an example of a cybersecurity game that simulates a cyber attack on a Linux machine. Combining it with the Persona pattern makes the game effective and novel.

(5) Pros, Cons, and Applications:
As shown in the example above, it is recommended to combine it with the various patterns (Persona, Infinite Generation, Visualization Generator) that have been introduced so far.

N: The Reflection Pattern

(1) Definition of the problem to be addressed:
Have the LLM output the reasoning that led to its answer.

(2) In what context does the problem arise:
Understanding by what process, on what basis, and under what assumptions the LLM produced its answer is important, as it makes debugging the prompt easier.

(3) Ideas for creating prompts:

Ask the LLM to also explain the rationale and assumptions behind its answers. Telling it that this is for the user's reference will make the explanation more useful for the user's understanding.

(4) Example prompt:
"When you provide an answer, please explain the reasoning and assumptions behind your selection of software frameworks. If possible, use specific examples or evidence with associated code samples to support your answer of why the framework is the best selection for the task. Moreover, please address any potential ambiguities or limitations in your answer, to provide a more complete and accurate response." (quoted from the text)

(5) Pros, Cons, and Applications:
The rationale the LLM gives may sometimes be beyond the user's understanding; such cases should be handled by combining this pattern with the Fact Check List pattern and the like.

O: The Refusal Breaker Pattern

(1) Definition of the problem to be addressed:
Find a different way of phrasing a prompt when the LLM refuses to answer.

(2) In what context does the problem arise:
LLMs sometimes refuse to answer when they lack the knowledge needed to answer or do not understand the intent of the question, which can frustrate the user. This pattern is about finding another way to ask the question so that the LLM can answer it.

(3) Ideas for creating prompts:

Ask the LLM to explain why it cannot respond. The reason may lie in assumptions, constraints, or misunderstandings, and examining it can provide hints for alternative wording. You can also ask the LLM to propose alternative wordings of the question.

(4) Example prompt:
"Whenever you can't answer a question, explain why and provide one or more alternate wordings of the question." (quoted from the text)
For example:
User: What is the meaning of life?
ChatGPT: As an AI ... The meaning of life is a complex philosophical question that has been pondered by humans ... It may be more productive to rephrase the question in a way that can be answered by information and knowledge, such as "What are some philosophical perspectives on the meaning of life?" or "What are some common beliefs about the purpose of life?" (adapted from the text)
In this example, the LLM has offered alternative wordings for an abstract question.

(5) Pros, Cons, and Applications:
This pattern is subject to misuse and must be used ethically.

P: The Context Manager Pattern

(1) Definition of the problem to be addressed:
Narrowing or broadening the context in a conversation with an LLM to adjust the focus of the conversation.

(2) In what context does the problem arise:
Both the LLM and the user may bring up content that strays from the central topic, which can disrupt the flow of the conversation. In such cases, it is important to maintain the coherence of the conversation.

(3) Ideas for creating prompts:

It is important to present a clear list of items to be considered and items to be ignored.

(4) Example prompts:
"When analyzing the following pieces of code, only consider security aspects." (taken from the text)
" Ignore everything that we have discussed. Start over. "
This is a statement to start over and reset, ignoring all previous conversations.

(5) Pros, Cons, and Applications:
When resetting the context, it is necessary to pay attention to the extent to which information is lost.

Q: The Recipe Pattern

(1) Definition of the problem to be addressed:
Have the LLM devise a procedure for achieving an objective from a given set of elements.

(2) In what context does the problem arise:
This pattern can be used when you know the ingredients and elements but do not know what steps to take.

(3) Ideas for creating prompts:

Begin by telling the LLM the problem you want to solve and presenting the intermediate steps you already know as the elements for achieving it. Ask it to use and supplement these to create a complete procedure, and to point out any unnecessary steps.

(4) Example prompt:
"I am trying to deploy an application to the cloud. I know that I need to install the necessary dependencies on a virtual machine for my application. I know that I need to install the necessary dependencies on a virtual machine for my application. I know that I need to sign up for an AWS account. Please identify any unnecessary steps.

(5) Pros, Cons, and Applications:
Note that if an intermediate step specified by the user is incorrect, the LLM's answer may be dragged along by it.

Summary

In this article, rather than presenting specific prompt templates, we introduced a paper that organizes the ideas and mindset behind creating prompts. Several patterns were presented, and by combining them effectively we may discover new possibilities for LLMs. We hope you will give them a try.

