
Improving The Quality Of Google Ads: Content Moderation With LLMs


3 main points
✔️ An advanced content moderation methodology: a scalable end-to-end solution that uses large language models (LLMs) to efficiently moderate Google ads content.
✔️ Efficient detection of ad policy violations: quickly and accurately identifies violations of the "non-family safe" policy in large volumes of ad images, outperforming previous models.
✔️ Broad applicability and future prospects: the approach extends beyond images to other modalities and ad policies, such as video, text, and landing pages, and continued optimization is expected to further improve the accuracy and efficiency of content moderation.

Scaling Up LLM Reviews for Google Ads Content Moderation
written by Wei Qiao, Tushar Dogra, Otilia Stretcu, Yu-Han Lyu, Tiantian Fang, Dongjin Kwon, Chun-Ta Lu, Enming Luo, Yuan Wang, Chih-Chun Chia, Ariel Fuxman, Fangzhou Wang, Ranjay Krishna, Mehmet Tek
(Submitted on 7 Feb 2024)
Comments: Published on arXiv.
Subjects: Information Retrieval (cs.IR); Computation and Language (cs.CL); Machine Learning (cs.LG)


The images used in this article are from the paper, the introductory slides, or were created based on them.

Summary

The paper introduces a scalable end-to-end solution that uses large language models (LLMs) to improve content moderation for Google ads. It first gives background on LLMs and the computational cost of reviewing ads at this scale, then describes a solution to this challenge and reports results on a platform that enforces Google Ads policies, before discussing future improvements and scalability.

The main goal of this paper is to detect Google Ads policy violations across all ad traffic, with high accuracy, before the ads enter the auction for delivery. Although the technique is first applied only to image ads, the approach is scalable to any modality or ad format; in practice this means not only large language models but also large visual-language models.

However, running LLMs over the entire stream of image ad traffic is impractical because of the computational resources required, and collecting annotation data to fine-tune and train smaller models is costly because human review bandwidth is limited. The paper therefore leverages Google's existing LLMs, using prompt engineering and tuning to build high-quality LLMs suited to ad content moderation, and shows how to make the most of these models with minimal computational resources. In particular, it tests the effectiveness of this approach on the "non-family safe" ad content policy (which restricts sexual suggestiveness, sexual products, nudity, and the like), a key policy for protecting users, advertisers, and publishers.

Method

The approach presented in this paper combines funneling of review candidates, labeling with an LLM, label propagation, and a feedback loop. An overview is shown in the figure below.

First, ad traffic is funneled: content and actor similarity, score-based selection using cheaper non-LLM models, deduplication, activity-based filtering, and cluster-based sampling all reduce the amount of content that needs to be processed. Next, the funneled candidates are labeled by an LLM (LLM Labeling) that has been adapted with prompt engineering and parameter-efficient tuning. Label propagation (Propagation) then uses content-similarity techniques to extend these labels and increase coverage. Finally, through a feedback loop back to the initial funneling step, images similar to those already labeled (directly by the LLM or via propagation) are selected as candidates for the next round, extending the LLM's coverage to the entire stream of image ad traffic. A minimal sketch of this loop appears below.
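To make the control flow concrete, here is a minimal, self-contained Python sketch of the four stages. The paper does not publish code, so every name here (funnel, llm_classify, propagate) is a hypothetical stub chosen for illustration; only the overall loop structure reflects the description above.

```python
def funnel(traffic, known_labels):
    # Stub: keep a few images not yet labeled. A real funnel would also
    # apply similarity expansion, score thresholds, dedup, and sampling.
    return [img for img in traffic if img not in known_labels][:3]

def llm_classify(image):
    # Stub for the expensive LLM call (a prompt-tuned policy classifier).
    return "violation" if "nsfw" in image else "ok"

def propagate(labels, traffic):
    # Stub: near-duplicates (here: a shared name prefix) inherit the label.
    out = {}
    for img in traffic:
        for seen, label in labels.items():
            if img.split("_")[0] == seen.split("_")[0]:
                out[img] = label
    return out

known = {}
traffic = ["nsfw_a", "nsfw_b", "cat_a", "cat_b", "dog_a"]
for _ in range(2):                       # feedback loop over review rounds
    candidates = funnel(traffic, known)  # 1. funneling
    known.update({c: llm_classify(c) for c in candidates})  # 2. LLM labeling
    known.update(propagate(known, traffic))                 # 3. propagation
print(known)  # all 5 images labeled with only 4 LLM calls
```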

Funneling of review candidates (Funneling) uses a variety of heuristics and signals to surface potential policy violations, reducing the amount of content the LLM must process through filtering and diversified sampling. Content similarity is used to build a similarity graph and propagate labels to images that resemble previously labeled policy-violating images. Actor similarity is also taken into account by collecting ad images from accounts that have violated policy. In addition, a pre-trained non-LLM model selects candidate images whose scores exceed a given threshold. The sketch below illustrates these heuristics.
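This is a hedged sketch of three of the funneling heuristics, under the assumption of precomputed image embeddings and cheap-model risk scores; the data is synthetic and the thresholds and cluster count are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))  # stand-in image embeddings
scores = rng.uniform(size=1000)           # cheap non-LLM model risk scores

# 1. Score-based selection: keep only images above a risk threshold.
idx = np.where(scores > 0.8)[0]

# 2. Deduplication: drop near-duplicates via a rounded-embedding hash
#    (a stand-in for perceptual hashing on production images).
seen, unique = set(), []
for i in idx:
    key = tuple(np.round(embeddings[i], 1))
    if key not in seen:
        seen.add(key)
        unique.append(i)

# 3. Cluster-based sampling: one representative per cluster, so the
#    LLM sees a diverse slice of the remaining traffic.
k = min(10, len(unique))
km = KMeans(n_clusters=k, n_init=10).fit(embeddings[unique])
reps = [unique[np.where(km.labels_ == c)[0][0]] for c in range(k)]
print(f"{len(reps)} review candidates selected from 1000 images")
```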

For LLM inference (LLM Labeling), two strategies are effective for adapting LLMs to a specific task: prompt engineering and parameter-efficient tuning. In prompt engineering, the questions posed to the LLM are carefully designed; in parameter-efficient tuning, a small number of parameters are fine-tuned on a labeled dataset to fit the task. This paper leverages the power of in-context learning and combines prompt engineering with parameter-efficient tuning to build a high-performance, policy-compliant LLM. Prompts are crafted manually by policy experts and combined with soft prompt tuning to produce final prompts suitable for the production system, as in the sketch below.
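As an illustration of the parameter-efficient (soft prompt) tuning idea, the sketch below trains only a small set of prompt vectors and a classification head on top of a frozen toy transformer, which stands in for the production LLM. All dimensions, names, and the training setup are assumptions for illustration.

```python
import torch
import torch.nn as nn

d_model, n_prompt, n_classes = 64, 8, 2
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
for p in encoder.parameters():          # freeze the base model entirely
    p.requires_grad = False

soft_prompt = nn.Parameter(torch.randn(1, n_prompt, d_model) * 0.02)
head = nn.Linear(d_model, n_classes)    # small task-specific head
opt = torch.optim.Adam([soft_prompt, *head.parameters()], lr=1e-3)

def step(x, y):
    # Prepend the learned prompt tokens to each input sequence, run the
    # frozen encoder, and read the prediction off the first prompt token.
    prompt = soft_prompt.expand(x.size(0), -1, -1)
    h = encoder(torch.cat([prompt, x], dim=1))
    loss = nn.functional.cross_entropy(head(h[:, 0]), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x = torch.randn(4, 16, d_model)          # toy "image token" embeddings
y = torch.randint(0, n_classes, (4,))    # toy policy labels
print(step(x, y))
```

Only the prompt vectors and head receive gradients, which is what makes this kind of tuning cheap enough to run per policy.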

For Label Propagation and the Feedback Loop, labels are propagated from the LLM-labeled candidates to similar images stored from previous traffic. Images labeled by the LLM are stored as known images, and new images similar enough to count as near-duplicates are assigned the same label. All images labeled directly or indirectly by the LLM are then loaded at the review-candidate selection stage as the initial set of known images for content-similarity expansion, identifying images similar enough to become candidates in the next round of LLM review. A sketch of the propagation step follows.
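The propagation step can be sketched as a nearest-neighbor lookup in embedding space with a strict near-duplicate threshold. The 0.95 cutoff and the synthetic embeddings below are illustrative assumptions; the paper does not specify its thresholds.

```python
import numpy as np

def propagate_labels(labeled_emb, labels, traffic_emb, threshold=0.95):
    """Copy labels from LLM-reviewed images to traffic images whose
    cosine similarity is high enough to count as a near-duplicate."""
    a = labeled_emb / np.linalg.norm(labeled_emb, axis=1, keepdims=True)
    b = traffic_emb / np.linalg.norm(traffic_emb, axis=1, keepdims=True)
    sims = b @ a.T                       # (m, n) cosine similarities
    nearest = sims.argmax(axis=1)
    propagated = {}
    for i, j in enumerate(nearest):
        if sims[i, j] >= threshold:      # near-duplicate: inherit the label
            propagated[i] = labels[j]
    return propagated

rng = np.random.default_rng(1)
base = rng.normal(size=(5, 32))          # embeddings of LLM-labeled images
traffic = np.vstack([
    base + 0.01 * rng.normal(size=(5, 32)),  # near-duplicates in traffic
    rng.normal(size=(5, 32)),                # unrelated new images
])
print(propagate_labels(base, ["violation"] * 5, traffic))
# only the 5 near-duplicates inherit the label; novel images are left
# for the next funneling round
```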

Results

The paper applies this technique to 400 million ad images collected over 30 days. Funneling first narrows the target down to about 0.1% of that traffic, roughly 400,000 images, all of which then undergo precise review by the LLM. After label propagation, the number of ads receiving positive labels doubled, meaning this approach labeled nearly twice as many images as a traditional multimodal non-LLM model. Notably, the method also outperformed the traditional model in accuracy on the "non-family safe" ad policy. Overall, the approach reduced image ads violating this policy by more than 15%.

They are currently extending this approach beyond images to other ad content such as video, text, and landing pages, as well as to a wider variety of ad policies. They are also working to improve the overall quality of the pipeline by refining the funneling process, further tuning the LLM prompts, and propagating similarity more effectively with higher-quality embeddings. Further improvements in the accuracy and efficiency of content moderation are expected.

Summary

Large language models are highly effective tools for content moderation, but their inference cost and latency are a challenge in settings like Google Ads, where enormous amounts of data must be handled. This paper proposes a way to use LLMs to scale content moderation for Google Ads efficiently.

Specifically, the method filters and de-duplicates ads, clusters them, selects representative ads from each cluster, and reviews only those representatives with the LLM. By then applying the LLM's judgments on the representatives to the entire cluster, the number of ads requiring review drops dramatically while recall roughly doubles compared with traditional non-LLM models. The success of this approach depends heavily on the data representations used for clustering and label propagation: cross-modal similarity representations were found to yield better results than single-modality representations, as illustrated below.
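The paper does not publish its exact representation recipe, but one simple way to build a cross-modal representation is to normalize and concatenate per-modality embeddings, so that clustering and propagation see both the image and the ad text. The sketch below uses synthetic vectors and is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def l2norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def cross_modal_embedding(image_emb, text_emb):
    # Normalize each modality first so neither dominates the distance,
    # then normalize the concatenation for cosine comparisons.
    return l2norm(np.concatenate([l2norm(image_emb), l2norm(text_emb)], axis=-1))

rng = np.random.default_rng(2)
img_a, txt_a = rng.normal(size=64), rng.normal(size=64)
# The same creative re-rendered: a noisier image, near-identical ad text.
img_b = img_a + 0.3 * rng.normal(size=64)
txt_b = txt_a + 0.05 * rng.normal(size=64)

image_only = float(l2norm(img_a) @ l2norm(img_b))
cross = float(cross_modal_embedding(img_a, txt_a)
              @ cross_modal_embedding(img_b, txt_b))
print(f"image-only: {image_only:.3f}  cross-modal: {cross:.3f}")
# the stable text signal pulls the cross-modal similarity higher,
# so near-duplicate creatives are easier to link
```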

Going forward, this approach should strengthen the identification and removal of inappropriate advertising and improve the user experience. Beyond advertising, the technique could be applied in diverse areas such as news article verification, social media monitoring, and the evaluation of educational materials. The results of this paper can serve as a catalyst not only for technical improvements but also for broader discussions of social implications and ethical considerations.

Takumu
I have worked as a Project Manager/Product Manager and Researcher at internet advertising companies (DSP, DMP, etc.) and machine learning startups. Currently, I am a Product Manager for new business at an IT company. I also plan services utilizing data and machine learning, and conduct seminars related to machine learning and mathematics.
