
Seven Design Principles For Generative AI Applications


Human-Computer Interaction

3 main points
✔️ Generative AI is being used in a variety of fields and must be safe and effective, minimizing potential harm.
✔️ The authors propose seven design principles for generative AI applications.

✔️ These principles are intended to help designers and developers build safe and effective generative AI applications.

Toward General Design Principles for Generative AI Applications
written by Justin D. Weisz, Michael Muller, Jessica He, Stephanie Houde
(Submitted on 13 Jan 2023)
Comments: 16 pages, 1 figure. Submitted to the 4th Workshop on Human-AI Co-Creation with Generative Models (HAI-GEN) at IUI 2023

Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)


The images used in this article are from the paper, the introductory slides, or were created based on them.


Titled "Toward General Design Principles for Generative AI Applications," this paper provides guidance for developing safe and effective applications as generative AI technology rapidly evolves.

The guidance is based on insights from recent research and practice and focuses on improving the user experience and handling generated outputs appropriately. The authors identify key principles spanning the management of multiple and imperfect outputs, user control and exploration, mental models and trust, and the mitigation of potential harms.

The authors hope these principles will help designers and developers build safe and effective generative AI applications.

Overview of the Seven Design Principles

In this paper, the authors develop seven design principles for generative AI applications. The principles are based on recent research in the HCI (human-computer interaction) and AI communities, particularly on the human-AI co-creation process.

Of the seven principles, six are shown as mutually overlapping circles, indicating that they are interrelated. The seventh stands apart: it addresses harmful output, misuse, and other detrimental effects of generative models. All of the principles focus on the environment of generative variability, in which the quantity, quality, and character of a generative AI application's outputs may vary.

What are the seven design principles?

Generative AI differs from other AI systems in that it does not determine something, but rather produces something: text, images, and many other kinds of artifacts. Also, while other AI systems aim to return the same output for the same input, generative AI may produce something different each time, so the same result can be difficult to reproduce. This characteristic is important for users to understand when interacting with a generative AI. The following sections discuss principles designed around these characteristics.

1. design for management of multiple outputs

Managing multiple outputs is an important issue in the design of generative AI applications. Because generative AI technologies are probabilistic, they may produce multiple different outputs even for the same input, so strategies are needed to help users process and leverage these outputs effectively. Specifically, versioning of outputs is important: users should be able to move between different outputs and return to previous ones. Curating and annotating outputs also matters; users should be able to organize outputs, annotate specific ones, and select them as needed. In addition, it is important to visualize the differences between outputs, with tools that help users understand the similarities and differences between them. These strategies help users make the most of a generative AI application.
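The versioning, annotation, and comparison strategies above can be sketched as a small history structure. The `OutputHistory` class below is a hypothetical illustration, not an API from the paper:

```python
import difflib

class OutputHistory:
    """Keeps every generated output so users can revisit, annotate, and compare them."""

    def __init__(self):
        self.versions = []  # each entry: {"output": str, "annotation": str}

    def add(self, output, annotation=""):
        # Store a new output and return its version index for later retrieval.
        self.versions.append({"output": output, "annotation": annotation})
        return len(self.versions) - 1

    def annotate(self, index, note):
        # Attach or replace a user note on a specific output.
        self.versions[index]["annotation"] = note

    def get(self, index):
        # Return to a previous output by version index.
        return self.versions[index]["output"]

    def diff(self, i, j):
        # Visualize line-level differences between two outputs.
        a = self.versions[i]["output"].splitlines()
        b = self.versions[j]["output"].splitlines()
        return "\n".join(difflib.unified_diff(a, b, lineterm=""))
```

A user-facing tool would render the diff visually, but even this text form shows which lines two candidate outputs share and where they diverge.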

2. design for imperfection

An important design principle for generative AI applications is to account for imperfections in the output; users need to understand that output is not perfect. Generative AI applications sometimes fail to deliver the output a user expects: results may contain defects such as image errors or code glitches, or may fail to respond adequately to a prompt. To handle incomplete outputs, the following strategies can help: generate multiple outputs and allow the user to select a satisfactory one; evaluate the quality of the output, with human review where necessary; allow users to edit the generated outputs to create final deliverables; and provide a sandbox environment that limits the impact of acting on a flawed output. Together, these strategies let users work with imperfect output and still achieve satisfactory results.
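A minimal sketch of the generate-several-and-select strategy, assuming a hypothetical `generate` stand-in for a real model call and a toy `quality_score` in place of a learned evaluator or human review:

```python
import random

def generate(prompt, seed):
    """Stand-in for a real generative model call (hypothetical)."""
    random.seed(seed)
    words = prompt.split() + ["draft", str(seed)]
    random.shuffle(words)
    return " ".join(words)

def quality_score(output):
    """Toy quality check; a real system would use an evaluator model or human review."""
    return len(output.split())

def best_of_n(prompt, n=5):
    # Generate several candidates and return all of them, ranked,
    # so the user can pick or edit one rather than trusting a single
    # imperfect output.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return sorted(candidates, key=quality_score, reverse=True)
```

Returning the whole ranked list, rather than only the top candidate, keeps the selection decision with the user, which is the point of this principle.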

3. design for human control

An important design principle for generative AI applications is to allow human control. When users can control the AI system, they work more efficiently and understand the results better. There are three types of controls in a generative AI application:

1. General controls: allow the user to control the number and variety of outputs produced; for example, adjusting the temperature changes how varied the outputs are.
2. Technology-specific controls: depend on the AI technology being used; for example, sliders that control attributes and properties of the generated artifact.
3. Domain-specific controls: depend on the type of artifact being generated; for example, in chemistry, the user can specify the properties of a molecule.

These controls allow users to customize the output of AI applications to their own needs.
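The three layers of controls could be gathered into a single configuration object. The field names below are illustrative assumptions, not an interface from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class GenerationControls:
    # Generic controls: apply to most generative systems.
    num_outputs: int = 3          # how many alternatives to produce
    temperature: float = 0.7      # higher -> more varied outputs
    # Technology-specific controls, e.g. slider-set attributes of an image model.
    attributes: dict = field(default_factory=dict)
    # Domain-specific controls, e.g. target properties of a molecule.
    constraints: dict = field(default_factory=dict)

    def validate(self):
        # Guard against settings that would confuse rather than empower the user.
        assert self.num_outputs >= 1, "need at least one output"
        assert 0.0 <= self.temperature <= 2.0, "temperature out of range"
        return self
```

For example, a chemistry application might pass `GenerationControls(temperature=1.2, constraints={"solubility": "high"})` so that generic and domain-specific settings travel together.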

4. design for search

An important design principle for generative AI applications is to support search: provide multiple outputs so users can explore the available options, in an environment the user can easily control. Depending on the technical architecture, different controls are available to the user, and exposing them facilitates exploration. Visualizing the explored space also helps the user understand the range of artifacts that can be generated.
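One way to support exploration is to sweep control settings and collect the results so the explored space can be displayed. Here `generate` is any hypothetical model call that accepts a temperature and a seed; the function name and parameters are illustrative assumptions:

```python
from itertools import product

def explore(prompt, generate, temperatures=(0.2, 0.7, 1.2), seeds=(0, 1)):
    """Sweep control settings and collect outputs keyed by the settings
    that produced them, so a UI can show the user the explored space."""
    grid = {}
    for temp, seed in product(temperatures, seeds):
        grid[(temp, seed)] = generate(prompt, temperature=temp, seed=seed)
    return grid
```

Because each output is keyed by the settings that produced it, a user can see which region of the control space they have already visited and which remains unexplored.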

5. design for mental models

As part of the design principles for generative AI applications, it is important to design for mental models so that users have an accurate understanding of the system's behavior and role. A mental model is the internal concept or framework of thought a person uses to perceive, understand, and predict the external world and surrounding circumstances. For example, our everyday knowledge of the rules of the road, of how to drive a car, or of how to operate devices such as cell phones and computers is part of a mental model. Users form mental models of how the system works, how outputs are produced, and how the system will meet their needs. Application designers therefore need to clearly communicate the application's behavior and role so that users can form accurate mental models.

6. design for user understanding and trust

In designing a generative AI application, it is important to ensure that users understand and trust the system. Users should be given explanations that help them understand the application's functionality and limitations, along with ways to deal with imperfect output. Recent research has examined how explanations can help users understand exactly how the model works. For example, visualizing the transformations the model performs can help users better understand its behavior.

7. design against hazards

In designing generative AI applications, potential hazards must be addressed. These hazards include discrimination, disclosure of personal information, and the spread of misinformation. Designers need to understand these risks and consider possible countermeasures. Specifically, they need to understand the risks of output, consider the potential for abuse, and look for ways to enhance rather than replace people's labor. This will lead to the development of safer and more beneficial generative AI applications.
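A toy sketch of screening generated text for hazards before it reaches users. Real systems layer learned classifiers, policy review, and human oversight on top of simple checks like these; every name below is hypothetical:

```python
import re

# Placeholder terms; a real system would source and maintain these carefully.
BLOCKLIST = {"slur_example"}
# Crude personal-data pattern: catches things that look like email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def screen_output(text):
    """Return a list of flagged issues; an empty list means no known hazard
    was detected (which is not the same as the output being safe)."""
    issues = []
    if any(term in text.lower() for term in BLOCKLIST):
        issues.append("blocklisted term")
    if EMAIL_RE.search(text):
        issues.append("possible personal information (email address)")
    return issues
```

Surfacing the flagged issues to the user, rather than silently suppressing output, also supports the trust and mental-model principles above.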


The design of a generative AI application should be flexible to meet user objectives and mitigate potential risks. Users have a wide range of objectives and require appropriate features and controls accordingly. Mitigating potential harm also requires that risks be considered from the design stage and that appropriate controls and explanations be provided. Taking a value-oriented approach will help ensure safety and reliability while meeting user needs.

The future of generative AI applications will focus on improving ethical considerations and transparency, promoting diversity and inclusion, improving sustainability and energy efficiency, and fostering collaboration and co-creation, through which technological innovation is expected to contribute to the overall benefit of society.

